In a homework assignment, I am asked how many terms of a series are needed to obtain four decimal places (chopped) of accuracy.
There are certain tricks that I am supposed to use, where I can determine an upper limit of the error and select the number of terms to make it small enough. However, it seems to me like there are some special cases where this will not work.
If the number that the series approaches is, for example, $0.123400005$, then knowing the error is less than $10^{-4}$ is not enough to guarantee that the first four digits are correct. If a finite sum gives $0.12339999$, the error is $1.5 \times 10^{-8}$, which is far more accurate than my textbook tells me I need. Yet the first four chopped digits are still not correct.
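To make the scenario concrete, here is a small Python sketch (the `chop` helper is my own, just for illustration) showing an approximation within $1.5 \times 10^{-8}$ of the true value whose first four truncated digits nevertheless disagree:

```python
import math

def chop(x, n):
    """Truncate (chop) x to n decimal places, without rounding."""
    return math.floor(x * 10**n) / 10**n

true_value = 0.123400005
approx = 0.12339999

error = abs(true_value - approx)   # about 1.5e-8, far below 1e-4
print(chop(true_value, 4))         # 0.1234
print(chop(approx, 4))             # 0.1233
```

The error bound is beaten by nearly four orders of magnitude, yet chopping to four places gives $0.1234$ for the true value and $0.1233$ for the approximation.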
So, if I am not given what the number that the series converges to is, how can I be sure that this kind of scenario will not happen?
I think the way to handle this is to treat "accurate to $n$ decimal places" as meaning "differing from the correct value by no more than $0.5 \times 10^{-n}$", or "having an error of no more than $5$ in the $(n+1)$th decimal place".
Otherwise there will be truncation/rounding issues at certain points. For example, consider $1.234999999$ and $1.235000000$. If we round them to two decimal places, we get $1.23$ and $1.24$. If we truncate them to three decimal places, we get $1.234$ and $1.235$.
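You can check this boundary behaviour directly. A sketch using Python's `decimal` module (which avoids the binary-float representation issues that `round()` on `float`s would introduce):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

a = Decimal("1.234999999")
b = Decimal("1.235000000")

# Rounding to two decimal places: the two nearby values split apart.
print(a.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 1.23
print(b.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 1.24

# Truncating (chopping) to three decimal places: they also differ.
print(a.quantize(Decimal("0.001"), rounding=ROUND_DOWN))     # 1.234
print(b.quantize(Decimal("0.001"), rounding=ROUND_DOWN))     # 1.235
```

Two numbers only $10^{-9}$ apart end up with different digit strings under both conventions, which is exactly why "correct digits" is a fragile criterion near these boundaries.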
So you're right that in some instances, getting the specified number of correct digits may require much greater precision in the calculation. I don't see any way to guarantee that this doesn't happen.
So I say the correct course of action is to define a maximum permitted error, and say you're using that as the criterion.