For a school programming assignment, I am trying to estimate the value of $\pi$ via the classic Monte Carlo dart-throwing experiment. In the experiment, we throw a variable number of darts at a circle inscribed in a square; the darts land at random $(x,y)$ points within the square. Knowing how many darts land inside the circle out of the total thrown, we can estimate the value of $\pi$.
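Here is a minimal sketch of a single experiment, assuming Python and the unit square $[0,1)^2$ with a quarter circle of radius 1 (the assignment may use a different setup):

```python
import random

# One experiment: throw n_darts at the unit square [0, 1) x [0, 1).
# A dart lands inside the quarter circle of radius 1 when x^2 + y^2 <= 1,
# which happens with probability pi/4.
n_darts = 100
hits = 0
for _ in range(n_darts):
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:
        hits += 1

# The fraction of hits approximates pi/4, so scale by 4 to estimate pi.
pi_estimate = 4.0 * hits / n_darts
print(pi_estimate)
```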
I am given that the number of darts in the circle may be thought of as $\mathrm{Binom}[n, \pi/4]$, where:
mean $= \dfrac{n\pi}{4}$ and standard deviation $= \sqrt{n \cdot \dfrac{\pi}{4}\left(1 - \dfrac{\pi}{4}\right)}$.
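For a single run with $n = 100$ darts, for instance, this gives mean $= \dfrac{100\pi}{4} \approx 78.5$ and standard deviation $= \sqrt{100 \cdot \dfrac{\pi}{4}\left(1 - \dfrac{\pi}{4}\right)} \approx 4.1$ darts inside the circle.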
I'm then asked to run the experiment with 100 darts 1000 times and compute the standard deviation of the 1000 estimates of $\pi$. This is where I get confused. Given the formula for the standard deviation, how would I use it for 1000 estimates of $\pi$? The formula seems to describe only a single run of the experiment, not a sample of 1000 estimates. Could anyone show an example calculation?
Define a computation as: throw 100 darts at the square, count the number $k$ that land inside the circle, and record the estimate $\hat{\pi} = 4k/100$.
You are asked to perform 1000 such computations and to compute the standard deviation of the sample consisting of the 1000 resulting estimates of $\pi$.
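That sample standard deviation is the number the exercise wants. The binomial formula then tells you roughly what to expect: each estimate is $\hat{\pi} = 4X/100$ with $X \sim \mathrm{Binom}[100, \pi/4]$, so its standard deviation is $\frac{4}{100}\sqrt{100 \cdot \frac{\pi}{4}\left(1 - \frac{\pi}{4}\right)} = \sqrt{\pi(4-\pi)/100} \approx 0.164$, and the sample standard deviation of your 1000 estimates should come out close to that. A minimal sketch in Python (the language, the unit-square setup, and the use of the `statistics` module are my own choices, not part of the assignment):

```python
import math
import random
import statistics

def estimate_pi(n_darts):
    """One computation: throw n_darts at [0, 1)^2 and return 4 * hits / n_darts."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n_darts))
    return 4.0 * hits / n_darts

# 1000 computations, 100 darts each.
estimates = [estimate_pi(100) for _ in range(1000)]

# Sample standard deviation of the 1000 estimates of pi.
sample_sd = statistics.stdev(estimates)

# What the binomial model predicts: sqrt(pi * (4 - pi) / 100) is about 0.164.
predicted_sd = math.sqrt(math.pi * (4 - math.pi) / 100)

print(f"sample std dev:    {sample_sd:.3f}")
print(f"predicted std dev: {predicted_sd:.3f}")
```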