Consider the Leibniz formula for $\pi$ $$ \pi=4\sum_{n=1}^\infty\frac{(-1)^{n-1}}{2n-1}. $$ What is the minimum number of terms needed to calculate $\pi$ accurate to $k$ decimal places, in the sense that the first $k$ decimal places remain unchanged?
My attempt: Let's consider $k=2$ decimal places for example and set $a_n=\frac{4}{2n-1}$. One way to think about this is to bound the remainder of the series by its first omitted term, $$ |R_n|\leq a_{n+1}=\frac{4}{2n+1}, $$ and to require $\frac{4}{2n+1}\leq 10^{-2}$, which holds for $n\geq 200$. Indeed, $$ \begin{align} R_{200}&=4\sum_{n=201}^\infty\frac{(-1)^{n-1}}{2n-1}\simeq 0.004999968751 \leq 10^{-2}. \end{align} $$ However, the partial sums give $$ \begin{align} S_{200}&=4\sum_{n=1}^{200}\frac{(-1)^{n-1}}{2n-1}\simeq 3.136592685, \end{align} $$ which is not accurate to two decimal places ($\pi\simeq 3.14...$). In fact, the minimum value of $n$ I found (computationally) that gives two accurate decimal places was $n=627$. Indeed, $$ \begin{align} S_{625}&\simeq 3.143192653\\ S_{626}&\simeq 3.139995211\\ S_{\mathbf{627}}&\simeq 3.143187549\\ S_{628}&\simeq 3.140000298\\ S_{629}&\simeq 3.143182478\\ &\vdots \end{align} $$ and for $n\geq 627$ we always get $3.14...$. Is there an analytical way of determining the minimum number of terms for any accuracy $k$? Note that this is not the same as asking for the first index at which we get exactly $k$ accurate decimal places, which is a seemingly harder question being discussed here.
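For reference, the $n=627$ value can be reproduced with a short brute-force search (a sketch in Python; the cutoff `n_max = 5000` is an arbitrary safety margin, justified below by the tail bound):

```python
import math

def leibniz_partial_sums(n_max):
    """Yield (n, S_n) with S_n = 4 * sum_{k=1}^{n} (-1)^(k-1) / (2k - 1)."""
    s = 0.0
    for k in range(1, n_max + 1):
        s += 4.0 * (-1) ** (k - 1) / (2 * k - 1)
        yield k, s

def min_terms(k_digits, n_max=5000):
    """Smallest n such that the first k_digits decimals of S_m agree with
    those of pi for every m = n, ..., n_max (truncation, not rounding)."""
    scale = 10 ** k_digits
    target = math.floor(math.pi * scale)  # 314 for k_digits = 2
    last_bad = 0
    for n, s in leibniz_partial_sums(n_max):
        if math.floor(s * scale) != target:
            last_bad = n
    return last_bad + 1

print(min_terms(2))  # 627, matching the search described above
```

The finite cutoff is harmless here: for $m>5000$ the tail bound $|R_m|<\frac{4}{2m+1}<4\times 10^{-4}$ confines every later partial sum to $(3.1411,\,3.1420)$, so no later sum can change the first two decimals.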
Thoughts: I feel a general answer might be hard and highly dependent on the digits of $\pi$, unless I am missing something. If an exact answer is not attainable, what is the best approximation? For $k=3$, solving $\frac{4}{2n+1}\leq 10^{-3}$ gives $n\geq 2000$, which guarantees $|R_n|\leq 10^{-3}$ but, as the $k=2$ case already shows, does not by itself guarantee that the first $k$ decimals have stabilized. Could this estimate be refined? I fear critical cases where many $0$'s or $9$'s appear in the expansion might be trickier to handle. How would one deal with those cases?
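On refining the estimate: a quick numerical experiment (a sketch, not a proof) suggests the true error of the Leibniz series behaves like $|\pi - S_n|\approx 1/n$, about half the crude bound $\frac{4}{2n+1}$; it also shows why $n\geq 2000$ alone cannot freeze the third decimal:

```python
import math

def S(n):
    """n-th partial sum of the Leibniz series, 4 * sum_{k=1}^{n} (-1)^(k-1)/(2k-1)."""
    return 4.0 * sum((-1) ** (k - 1) / (2 * k - 1) for k in range(1, n + 1))

for n in (200, 2000):
    err = abs(math.pi - S(n))
    # err is roughly 1/n, i.e. about half of the bound 4/(2n+1)
    print(n, err, 4 / (2 * n + 1), round(n * err, 6))

# The bound suggests n = 2000 for k = 3, and S_2000 indeed starts 3.141...
print(math.floor(S(2000) * 1000))  # 3141
# ...but the next (odd) partial sum overshoots pi into 3.142:
print(math.floor(S(2001) * 1000))  # 3142
```

The even partial sums approach $\pi$ from below and the odd ones from above, each with error close to $1/n$, so whether $k$ decimals have stabilized depends on how far the truncation point of $\pi$ sits from the two envelopes $\pi\pm 1/n$. That is exactly what makes runs of $0$'s or $9$'s delicate.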
As you noticed in your thoughts, without knowing the number you are approximating, you cannot be sure that a given level of accuracy pins down any particular digit (your example of an arbitrarily long string of zeros or nines shows this).
In numerical analysis, I believe the standard definition of "accurate to $d$ decimal places" is that the difference between the true value $x$ and the approximate value $\tilde x$ has at least $d$ zeros after the decimal point (regardless of rounding to that decimal place), that is,
$$\vert x - \tilde x\vert \leq 5\times 10^{-(d+1)}.$$
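Under this definition the alternating-series bound is essentially sharp. A minimal check (a sketch, taking `math.pi` as the reference value) finds the first Leibniz partial sum within $5\times 10^{-(d+1)}$ of $\pi$ for $d=2$:

```python
import math

# First n with |pi - S_n| <= 5e-3, i.e. "accurate to d = 2 decimal places"
# in the sense of the definition above.
s, n = 0.0, 0
while True:
    n += 1
    s += 4.0 * (-1) ** (n - 1) / (2 * n - 1)  # add the n-th Leibniz term
    if abs(math.pi - s) <= 5e-3:
        break
print(n)  # 200
```

This lands at $n=200$, the same point where your bound $\frac{4}{2n+1}\leq 10^{-2}$ first holds, whereas the digit-stability criterion required $n=627$.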