When discussing with my son (15-year-old school level) a few of the many methods to calculate the digits of $\pi$, I realized that the methods I know more or less well (geometric approximation, Monte Carlo, and basic series) are all convergent, but none of them explicitly states that the $n$-th digit calculated at some point is indeed a true digit (i.e. that it will not change in further calculations).
To take an example, the Gregory–Leibniz series gives us, for each step:
$$ \begin{align} \frac{4}{1} & = 4\\ \frac{4}{1}-\frac{4}{3} & = 2.666666667...\\ \frac{4}{1}-\frac{4}{3}+\frac{4}{5} & = 3.466666667...\\ \frac{4}{1}-\frac{4}{3}+\frac{4}{5}-\frac{4}{7} & = 2.895238095... \end{align} $$
The integer part has changed with every new term. Why would we know that $3$ is the correct first digit?
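The oscillation above can be reproduced with a short script (a sketch; `leibniz_partial_sums` is just a name I am introducing for illustration):

```python
def leibniz_partial_sums(n):
    """Return the first n partial sums of the Gregory-Leibniz series
    4/1 - 4/3 + 4/5 - 4/7 + ..."""
    total = 0.0
    sums = []
    for k in range(n):
        total += (-1) ** k * 4 / (2 * k + 1)
        sums.append(total)
    return sums

# The integer part flips 4 -> 2 -> 3 -> 2 over the first four terms.
for i, s in enumerate(leibniz_partial_sums(4), start=1):
    print(f"after {i} term(s): {s:.9f}")
```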
Similarly for Monte Carlo: the larger the sample, the better the result, but do we mathematically know at some point that "now that we have tried [that many times], we are mathematically sure that $\pi$ starts with $3$"?
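For concreteness, here is a minimal sketch of the Monte Carlo approach I have in mind (the function name and the fixed seed are my own choices): sample points in the unit square and count how many land inside the quarter circle.

```python
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi by sampling n points in the unit square and
    counting the fraction inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / n

# The estimate wanders as the sample grows; nothing here tells us
# when a digit has become final.
for n in (100, 10_000, 1_000_000):
    print(n, monte_carlo_pi(n))
```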
In other words:
- does each of the techniques to calculate $\pi$ (or at least the major ones) have a proof that a given digit is now correct?
- if not, what are examples of the ones which do and do not have this proof?
Note: The great answers so far (thank you!) mention a proof for a specific technique, and/or a proof that a specific digit is indeed the correct one. I was more interested in understanding whether this applies to all of the (major) techniques, i.e. whether they all certify that a given digit is guaranteed correct.
Or do we have some which do (the ones in the first two answers, for instance) and others which do not (the further we go, the more precise the number, but we do not know whether something will jump in at some step and change a previously stable digit)? While typing this and thinking on the fly, I wonder whether that lack of stability would not make such a technique a very bad one in itself.
I think the general answer you're looking for is:
Yes, proving that a method for calculating $\pi$ works requires also describing (and proving) a rule for when you can be sure of a digit you have produced. If the method is based on "sum such-and-such series", this means that one needs to provide an error bound for the series. Before you have that, what you're looking at is not yet a "method for calculating $\pi$".
So the answer to your first question is "Yes; because otherwise they wouldn't count as techniques for calculating $\pi$ at all".
Sometimes the error bound can be left implicit because the reader is supposed to know some general theorems that lead to an obvious error bound. For example, the Leibniz series you're using is an alternating series whose terms decrease in absolute value, and therefore we can avail ourselves of a general theorem saying that the limit of such a series is always strictly between the last two partial sums. Thus, if you get two approximations in succession that start with the same $n$ digits, you can trust those digits.
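That stopping rule can be sketched in a few lines (a rough illustration, not production code; `certified_digits` is a name of my own, and the final digit of each formatted sum is rounded rather than truncated, which doesn't matter here because we only keep digits well above that noise):

```python
def certified_digits(n_terms):
    """Sum n_terms of the Leibniz series and return the decimal prefix
    shared by the last two partial sums.  Since pi lies strictly
    between them, every digit of that shared prefix is guaranteed."""
    total, prev = 0.0, None
    for k in range(n_terms):
        prev, total = total, total + (-1) ** k * 4 / (2 * k + 1)
    lo, hi = sorted((prev, total))
    a, b = f"{lo:.12f}", f"{hi:.12f}"
    common = ""
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        common += ca
    return common

print(certified_digits(1_000_000))  # after a million terms: "3.14159"
```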
(The Leibniz series is of course a pretty horrible way to calculate $\pi$ -- for example, you'll need at least two million terms before you have any hope of the first six digits after the decimal point stabilizing, and the number of terms needed increases exponentially with the number of digits you want.)
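The "two million terms" figure follows directly from the alternating-series bound: the error after $n$ terms is below the first omitted term, $4/(2n+1)$, so $d$ stable decimal places need $4/(2n+1) < 10^{-d}$. A quick check (the function name is mine):

```python
def terms_needed(d):
    """Smallest n with 4/(2n + 1) < 10**-d, i.e. the number of Leibniz
    terms guaranteeing d stable decimal places."""
    return (4 * 10 ** d - 1) // 2 + 1

# Each extra digit multiplies the work by 10: 200, 20_000, 2_000_000, ...
for d in (2, 4, 6, 8):
    print(d, terms_needed(d))
```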
In other cases where an error bound is not as easy to see, one may need to resort to ad-hoc cleverness to find and prove such a bound -- and then this cleverness is part of the method.