Measuring $\pi$ by throwing darts


I want to give an approximation of $\pi$ in this way: I inscribe a circle in a square then I throw darts at random on the square from far away. If the darts falling on the square are $n$ and the darts falling on the circle are $m<n$ I approximate $\pi$ with $4 \frac{m}{n}$.
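The scheme above is easy to simulate. Here is a minimal sketch (the function name and the choice of square $[-1,1]^2$ are mine, not part of the question):

```python
import random

def estimate_pi(n, seed=0):
    """Throw n darts uniformly at the square [-1, 1]^2 and count
    the hits m inside the inscribed unit circle; return 4*m/n."""
    rng = random.Random(seed)
    m = sum(1 for _ in range(n)
            if rng.uniform(-1, 1) ** 2 + rng.uniform(-1, 1) ** 2 <= 1)
    return 4 * m / n

print(estimate_pi(100_000))  # close to pi, but not within 0.0001
```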

Suppose I want an approximation such that $|4\frac{m}{n}-\pi|<0.0001$.

How can I quantify how many shots I have to do at least (before doing the experiment)?

I know that the strong law of large numbers tells me that $P(\lim_{n\to\infty} 4\frac{m}{n}=\pi)=1$, but I can't do an infinite number of shots, so I try with the weak law: $\lim_{n\to\infty}P(|4\frac{m}{n}-\pi|<0.0001)=1$. Again, this seems to be unhelpful to my cause.

Maybe I could settle for an approximation such that $|4\frac{m}{n}-\pi|<0.0001$ with probability greater than $0.95$, but even if I achieve this goal, nothing prevents the approximation in the remaining $5\%$ of cases from being as bad as $|4\frac{m}{n}-\pi|>3$.

What would your approach to this problem be?



Assume the dart throws are independent. Then each dart falls inside the circle with probability $\frac{\pi}{4}$, the ratio of the circle's area to the square's.
The probability that the random variable $m$ takes value $k$ after $n$ shots is $P(m(n) = k) = \binom{n}{k} (\frac{\pi}{4})^k (1 - \frac{\pi}{4})^{n-k}$.

Now the probability that $4 \frac{m}{n} \leq \pi + \varepsilon$ is the same as the probability that $m \leq \lfloor \frac{n(\pi + \varepsilon)}{4} \rfloor$. Symmetrically, the probability that $4 \frac{m}{n} \geq \pi - \varepsilon$ is the same as the probability that $m \geq \lceil \frac{n(\pi - \varepsilon)}{4} \rceil$. Thus, the probability to have $|4\frac{m}{n} - \pi| \leq \varepsilon$ is given by:
$Q(n) = P(|4\frac{m}{n} - \pi| \leq \varepsilon) = \sum_{k = \lceil \frac{n(\pi - \varepsilon)}{4} \rceil}^{\lfloor \frac{n(\pi + \varepsilon)}{4} \rfloor} \binom{n}{k} (\frac{\pi}{4})^k (1 - \frac{\pi}{4})^{n-k}$.

You cannot be sure to obtain the desired precision after $n$ shots, but you can always compute $n$ so that $Q(n) \geq 1-\eta$ for any $\eta$ you deem acceptable. This will give you an estimate of $n$.
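$Q(n)$ can be evaluated exactly with a short script. A sketch (the log-pmf trick via `lgamma` is my choice, to avoid overflow in the binomial coefficients; the example values $n=1000$, $\varepsilon=0.05$ are arbitrary):

```python
import math

def Q(n, eps=0.05, p=math.pi / 4):
    """Exact probability that |4*m/n - pi| <= eps when m ~ Binomial(n, p)."""
    lo = max(math.ceil(n * (math.pi - eps) / 4), 0)
    hi = min(math.floor(n * (math.pi + eps) / 4), n)

    def log_pmf(k):
        # log of C(n, k) * p^k * (1-p)^(n-k), computed via lgamma for stability
        return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                + k * math.log(p) + (n - k) * math.log(1 - p))

    return sum(math.exp(log_pmf(k)) for k in range(lo, hi + 1))

print(Q(1000))   # chance of landing within eps = 0.05 after 1000 darts
print(Q(10000))  # more darts -> higher chance
```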


When dealing with a real experiment, the number $n$ of darts landing in the square will always be finite, so the limit theorems are not directly applicable. You cannot guarantee the size of the error, but you can choose $n$ large enough that the probability of a small error is high. The probability of getting a good estimate thus depends entirely on the desired error size and on $n$. It can never be $100\%$: all $n$ darts can miss the circle, giving a bad estimate for any chosen error size less than $\pi$ itself.

Boson pointed out that since the darts are thrown at the square at random, not aimed anywhere in particular, their positions are uniformly distributed, so the number of darts $m$ hitting the circle is binomially distributed with success probability $p=\frac{\pi}{4}$ over $n$ trials. Since you want small errors and will need to throw a lot of darts, the normal approximation applies: $m\sim\mathrm{Bin}(n,p)\approx\mathrm{N}\big(np,\sqrt{np(1-p)}\big)$ with $p=\frac{\pi}{4}$ (the second parameter being the standard deviation).
Then the total error of the estimate satisfies $\mathrm{err}(m,n,p)=4\frac{m}{n}-4p\sim \mathrm{N}\Big(0,\,4\sqrt{\tfrac{p(1-p)}{n}}\Big)$,
which is what the central limit theorem implies, because $\frac{m}{n}$ is the average of $n$ Bernoulli variables with success probability $p$.
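A quick empirical check of this normal approximation (an illustrative sketch; the trial counts are arbitrary): the sample standard deviation of the error $4\frac{m}{n}-\pi$ should come out close to $4\sqrt{p(1-p)/n}$.

```python
import math
import random
import statistics

# Simulate many experiments of n darts each and compare the empirical
# spread of the error 4*m/n - pi with the predicted 4*sqrt(p*(1-p)/n).
rng = random.Random(1)
n, p, trials = 1000, math.pi / 4, 2000
errors = []
for _ in range(trials):
    m = sum(rng.random() < p for _ in range(n))  # darts inside the circle
    errors.append(4 * m / n - math.pi)

predicted = 4 * math.sqrt(p * (1 - p) / n)
print(statistics.stdev(errors), predicted)  # the two should be close
```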

So with large $n$ we can lower the spread of the error, making a large error less probable. In particular, if we do not know the value of $\pi$ (and hence $p$) beforehand, note that $\underset{0<p<1}{\mathrm{max}}\sqrt{p(1-p)}=\sqrt{{\frac{1}{2}(1-\frac{1}{2})}}=\frac{1}{2}$, so the largest standard deviation the error may have is $\frac{2}{\sqrt{n}}$. Then for every $c>0$ and $0<\alpha<1$ we can calculate $n$ so that $\mathrm{P}(|\mathrm{err}(n)|\leq c)\geq \alpha$, where $\mathrm{err}(n)\sim\mathrm{N}(0,\frac{2}{\sqrt{n}})$.
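Solving $\mathrm{P}(|\mathrm{err}(n)|\leq c)\geq\alpha$ for $n$ is a one-liner with the standard normal quantile: $n \geq (z_{(1+\alpha)/2}\,\sigma_1/c)^2$, where $\sigma_1$ is the worst-case one-dart standard deviation. A sketch using the stdlib `statistics.NormalDist` (the function name `darts_needed` is mine):

```python
import math
from statistics import NormalDist

def darts_needed(c, alpha, sigma1=2.0):
    """Smallest n with P(|err| <= c) >= alpha when err ~ N(0, sigma1/sqrt(n)).
    sigma1 = 2 is the worst case over all p; sqrt(3) if we know p > 3/4."""
    # P(|Z| <= c*sqrt(n)/sigma1) >= alpha  <=>  c*sqrt(n)/sigma1 >= z
    z = NormalDist().inv_cdf((1 + alpha) / 2)
    return math.ceil((z * sigma1 / c) ** 2)

print(darts_needed(0.0001, 0.95))                       # about 1.5 billion
print(darts_needed(0.0001, 0.95, sigma1=math.sqrt(3)))  # about 1.2 billion
```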

In particular, for the error to be smaller than $0.0001$ with probability greater than $94.7\%$, you need $1.5$ billion darts to hit the square. Going to $3$ billion, you are more than $99.3\%$ certain to get a good estimate. At this point it may be more probable to hit the actual $\pi$ than to get a significantly large error (joke). However, these numbers are over the top, as they would give a good estimate of $\pi$ even if the actual $\pi$ were any number between $0$ and $4$. If, for example, we additionally know $\pi>3$, then we can take $p=\frac{\pi}{4}>\frac{3}{4}$, which gives $\mathrm{sd}(\mathrm{err})<\frac{\sqrt3}{\sqrt{n}}<\frac{2}{\sqrt{n}}$, so a smaller $n$ suffices for the same $c$ and $\alpha$. For the error to be less than $0.0001$ with probability greater than $94.4\%$, knowing $\pi>3$, we need just $1.1$ billion darts; without the assumption $3<\pi<4$, that many darts would make us only $90.3\%$ certain the estimate is good.
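These percentages follow directly from the normal model $\mathrm{err}\sim\mathrm{N}(0,\sigma_1/\sqrt{n})$ and can be checked in a few lines (a sketch; `prob_good` is my name for the quantity $\mathrm{P}(|\mathrm{err}|\leq c)$):

```python
import math
from statistics import NormalDist

def prob_good(n, c=0.0001, sigma1=2.0):
    """P(|err| <= c) when err ~ N(0, sigma1/sqrt(n))."""
    sd = sigma1 / math.sqrt(n)
    return 2 * NormalDist(0, sd).cdf(c) - 1

print(prob_good(1.5e9))                        # about 0.947
print(prob_good(3e9))                          # about 0.994
print(prob_good(1.1e9, sigma1=math.sqrt(3)))   # about 0.944, knowing pi > 3
print(prob_good(1.1e9))                        # about 0.903 without that fact
```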

It is interesting to note that if we began the test falsely thinking $\pi=2$, we would need to throw a good number of darts before reliably noticing that the error is not small at all, which would hint at the actual value of $\pi$. But if we decide $\pi=0$ or $\pi=4$, so that the error has zero variance, then in theory we need only one dart: having done these calculations, we conclude $\pi=4$ if the dart hits the circle, or $\pi=0$ if it misses. This shows that the test is constructed in such a way that if we assume a wrong value of $\pi$, we can be led to accept it by not throwing enough darts to show otherwise! Thankfully, because the actual $\pi$ is far from $0$ and $4$, if we make correct assumptions, or none at all, a good number of darts is sure to give a good estimate.

Looking at it from a statistical point of view, for every $m$ darts in the circle out of $n$ shot at the square and hypothesis $H_c:\pi=c$, $0\leq c\leq4$, you can find
$\text{p-value}(m,n,c)=2\,\mathrm{min}\{\mathrm{P}(z\leq\mathrm{err})\,,\,\mathrm{P}(z\geq\mathrm{err})\}$, where $p=\frac{c}{4}$ and $z\sim\mathrm{N}\big(0,4\sqrt{\frac{p(1-p)}{n}}\big)$; i.e. $\text{p-value}(m,n,c)$ is the probability of observing a larger error than the one given by the particular $m$ and $n$ if $\pi=c$. Then every time you throw $n$ darts at the square and $m$ land in the circle, the $\hat c$ satisfying $\text{p-value}(m,n,\hat c)=\underset{0\leq c\leq4}{\mathrm{max}}\,\text{p-value}(m,n,c)$ is the value of $\pi$ for which the hypothesis test (that the error has expected value $0$) passes at the highest level of significance. In other words, $\hat c$ is such that if $\pi=\hat c$, the probability of observing a larger error than the one seen in the experiment is the highest among all possible values of $\pi$, and hence the probability of observing a smaller error is the lowest, making $\hat c$ the most plausible value for $\pi$ given exactly $m$ darts in the circle out of $n$ in the square. Of course, this only gives good results for large enough $n$.
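This maximization can be sketched with a grid search (an illustrative, assumed implementation; the grid resolution and the example counts $n$, $m$ are mine). Note the maximizer lands at $\hat c = 4\frac{m}{n}$, where the observed error is exactly zero and the p-value is $1$:

```python
import math
from statistics import NormalDist

def p_value(m, n, c):
    """Two-sided p-value for H_c: pi = c, using the normal approximation
    of err = 4*m/n - c with sd 4*sqrt(p*(1-p)/n), where p = c/4."""
    p = c / 4
    if p <= 0 or p >= 1:
        return 1.0 if m == n * p else 0.0  # degenerate endpoints c = 0, 4
    err = 4 * m / n - c
    z = NormalDist(0, 4 * math.sqrt(p * (1 - p) / n))
    return 2 * min(z.cdf(err), 1 - z.cdf(err))

# Grid search over c in (0, 4); the p-value peaks at c = 4*m/n.
n, m = 100_000, 78_540
c_hat = max((k / 10_000 for k in range(1, 40_000)),
            key=lambda c: p_value(m, n, c))
print(c_hat, 4 * m / n)  # the two agree
```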