At the London Science Museum there is a wonderful working example of Babbage's Difference Engine.
The accompanying description says that it generated polynomial coefficients to 30 decimal places for use in trigonometry and logarithm tables.
To me, 5-10 decimal places seems sufficient for most problems; 30 looks like an unnecessary level of accuracy for the navigation and construction work of the time.
My question is: why did Babbage's Difference Engine need to compute polynomial coefficients to 30 decimal places?


We only need $39$ digits of $\pi$ to compute the circumference of the observable universe to within the width of a hydrogen atom, yet we've calculated $\pi$ to more than 100 trillion digits. Why? Because we can, and because it is a good test of the efficiency of any computing device.
I'm guessing this is the same kind of case: the extra digits were computed because they could be, with the by-product of improving the accuracy of the calculations. In reality the number of digits needed depends on the accuracy required and the problem at hand, but to me even $10$ digits seems like overkill.
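The "by-product of improving accuracy" point can be made concrete. The engine tabulates a polynomial by nothing but repeated addition of finite differences, so any rounding error lodged in a stored difference is re-added at every one of the thousands of entries in a table; carrying many more digits than the table will print absorbs that accumulation. Here is a minimal Python sketch of the difference method, my own illustration of the scheme the engine mechanises (not Babbage's actual mechanism or word size):

```python
from decimal import Decimal, getcontext

def tabulate(coeffs, start, step, n, digits):
    """Tabulate the polynomial with coefficients coeffs (coeffs[i] is the
    x**i term) at start, start+step, ... for n entries, using only
    repeated addition of finite differences and `digits` decimal digits
    of working precision."""
    getcontext().prec = digits  # precision carried through every addition

    def p(x):  # Horner evaluation, used only to seed the table
        acc = Decimal(0)
        for c in reversed(coeffs):
            acc = acc * Decimal(x) + Decimal(c)
        return acc

    # A degree-d polynomial needs d+1 seed values; its d-th difference
    # is constant thereafter.
    k = len(coeffs)
    diffs = [p(start + i * step) for i in range(k)]
    for level in range(1, k):                   # turn the seed values
        for i in range(k - 1, level - 1, -1):   # into [f, Δf, Δ²f, ...]
            diffs[i] -= diffs[i - 1]

    values = [diffs[0]]
    for _ in range(n - 1):
        # One turn of the crank: each difference absorbs the one above it,
        # so diffs[0] becomes the next tabulated value.
        for level in range(k - 1):
            diffs[level] += diffs[level + 1]
        values.append(diffs[0])
    return values
```

With exact integer data every entry comes out exact, but when the differences are non-terminating decimals each turn of the crank adds a rounded quantity, so the worst-case error grows with the length of the table; the surplus digits act as guard digits that keep the printed places trustworthy.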