I ran into this paragraph in the introductory chapter on Taylor series of Morris Kline's "Calculus: An Intuitive And Physical Approach":
"Thus, if the value of $\sin(x)$ for a particular value of $x$ is needed to five decimal places, the mathematician will make certain that the error is indeed no greater than the quantity $0.000,005$"
I don't understand that. Let's say that the exact value of $\sin(x)$ for a certain $x$, or of any other function for that matter, is $0.836,229$. The exact value to $5$ decimal places would then be $0.836,22$.
However, with an error of $+0.000,005$, the computed value could be $0.836,234$, and the value to $5$ decimal places would then end up being $0.836,23$, which would be wrong.
Am I missing something or is there an error in the text?
The statement that $x=0.83623$ means that $0.836225\le x<0.836235$, or, roughly speaking, $\vert x-0.83623\vert<0.000005$.
This is what five decimal place precision means: the exact value is rounded, not truncated, to five places. In your example you truncated; rounding $0.836,229$ gives $0.836,23$, the same result as rounding $0.836,234$, so there is no contradiction.
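A quick sketch (not from the text) illustrating the point: every value inside the interval $[0.836225, 0.836235)$ rounds to $0.83623$ at five decimal places, so an approximation with error below $0.000005$ cannot change the rounded five-place result. The helper function name here is my own.

```python
from decimal import Decimal, ROUND_HALF_UP

def round5(x: str) -> Decimal:
    # Round a decimal string to five decimal places (half-up),
    # i.e., the "five decimal place precision" of the answer above.
    return Decimal(x).quantize(Decimal("0.00001"), rounding=ROUND_HALF_UP)

print(round5("0.836229"))  # 0.83623  (the exact value in the question)
print(round5("0.836234"))  # 0.83623  (exact value plus an error of +0.000005)
print(round5("0.836225"))  # 0.83623  (lower edge of the interval)
```

Both the exact value and the perturbed value round to the same five-place answer, which is exactly the guarantee Kline describes.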