Precision and roundoff error when calculating logarithms by hand


In the era before computers, hundreds of lifetimes were spent tabulating values of transcendental functions by hand. Logarithm tables stand as the most obvious example, but similarly massive trigonometric, log-trigonometric, and other tables existed. The first base-10 logarithm table, compiled by Henry Briggs, had tens of thousands of entries accurate to 14 digits of precision. In the 17th century. By hand. It's crazy.

I am struggling to understand how the first logarithm tables were compiled without runaway errors. Even if every multiplication, square root, and other computation step were carried out without an occasional arithmetic slip, humans are still limited by the number of digits they can write on a piece of paper. There is still round-off error, much like in fixed-point arithmetic on modern computers. And just as in a modern computer, choosing the wrong algorithm can let those errors creep in and explode, yielding garbage.

John Napier apparently spent 20 years of his life computing the series

$$p_{n+1} = p_n (1 - 10^{-7})$$

for $n \in [0, 10^7]$, starting from $p_0 = 10^7$. What results is essentially a logarithm table. I am just shocked that (1) this was done by hand and produced a useful result, and (2) someone as intelligent as Napier devoted 20 years to it without knowing in advance that the computation would work out in the end. Usually when I take repeated products of the same number, errors accumulate.
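The kind of experiment I have in mind can be sketched in a few lines of Python (a toy simulation, not Napier's actual procedure: I cut the step count down from $10^7$, and I assume rounding to 7 decimal places, which I believe is what Napier carried):

```python
from decimal import Decimal, getcontext

# Toy simulation of Napier's recursion p_{n+1} = p_n * (1 - 10^-7),
# rounding every intermediate value to 7 decimal places, then
# comparing against a high-precision reference run.
STEPS = 10_000                            # far fewer than Napier's 10^7
ratio = Decimal(1) - Decimal(10) ** -7    # 0.9999999
quantum = Decimal("0.0000001")            # 7 decimal places

# Rounded run: quantize (round to nearest) after each multiplication.
p = Decimal(10) ** 7                      # starting value p_0 = 10^7
for _ in range(STEPS):
    p = (p * ratio).quantize(quantum)

# Reference run, carried to 50 significant digits.
getcontext().prec = 50
exact = (Decimal(10) ** 7) * ratio ** STEPS

print("rounded:", p)
print("exact  :", exact)
print("error  :", abs(p - exact))
```

Each rounding is off by at most $\tfrac12 \cdot 10^{-7}$, so even the worst case over $10^4$ steps is $5\cdot 10^{-4}$, and in practice the signed errors largely cancel.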

Can someone show that, even after rounding the intermediate $p_n$'s to a fixed number of digits, Napier's method is stable, in the sense that the average error from the true value doesn't grow with each multiplication?
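Here is my own rough attempt, in case it helps. If each $p_n$ is rounded to the nearest multiple of $10^{-7}$, the absolute error $e_n$ of the rounded value against the true one seems to obey

$$e_{n+1} \le (1 - 10^{-7})\, e_n + \tfrac{1}{2} \cdot 10^{-7},$$

since the old error is multiplied by a factor less than $1$ and each rounding adds at most half a unit in the last place. Summing the geometric series would give $e_n \le \tfrac{1}{2} \cdot 10^{-7} \cdot \frac{1}{10^{-7}} = \tfrac{1}{2}$ for every $n$, i.e. a bounded error. But I am not confident this is the right way to account for the errors.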

I must be missing something here!