In 1777, Georges-Louis Leclerc, Comte de Buffon, posed a question:
If we drop a needle onto a lined piece of paper, how likely is it to cross one of the lines? If the needle is shorter than the gap between the lines, the answer is 2/pi.
Then in 1812, Pierre-Simon Laplace observed:
When we want to know something about a complex quantity, we can estimate its value by sampling from it.
Fast-forward 206 years: I tried to simulate dropping needles onto the ground in Python 3.6 and to estimate the value of pi with the following code:
```python
import math
import random

iteration_count = 10000000
crossed = 0
needle_length = 1.0
gap_length = 2.0

for i in range(iteration_count):
    # Needle's center drops somewhere between -1 and +1 distance from a line
    drop_point = random.randint(-10, 10) / 10
    # Angle in radians, roughly uniform over [0, pi/2]
    drop_degree_rad = random.randint(0, 157079632679) / 100000000000
    tip = (math.sin(drop_degree_rad) * needle_length / 2) + drop_point
    bottom = drop_point - (math.sin(drop_degree_rad) * needle_length / 2)
    if math.fabs(tip) >= 1 or math.fabs(bottom) >= 1:
        crossed += 1

print(crossed / iteration_count)
```
I expect a value close to 0.31830988618, which is 2/pi/2 = 1/pi, since my gap length is twice my needle length. However, the values I am getting are close to 0.343843, from which I would derive pi to be 2.90830408064, about 8% off.
Obviously I am missing something, but what?
Well, the obvious hypothesis is that your "needles" always land centered on points of the form $\frac{x}{10}$ for integer $x$. To see why this is a problem, imagine an extreme example: instead of `random.randint(-10, 10) / 10` you use `random.randint(-1, 1) / 1`, so the center can only be -1, 0, or +1. Then the needle crosses a line exactly 2/3 of the time!
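You can check that claim empirically. Here is a quick sketch (my own code, with a `crossing_fraction` helper I made up, keeping your crossing test but using a continuous angle): with only three possible centers, a center of ±1 always touches a line and a center of 0 never does, so the fraction converges to 2/3 rather than 1/pi.

```python
import math
import random

def crossing_fraction(trials=100_000, seed=0):
    """Simulate the extreme case: needle center restricted to -1, 0, or +1."""
    rng = random.Random(seed)
    crossed = 0
    for _ in range(trials):
        drop_point = rng.randint(-1, 1)            # only three possible centers
        angle = rng.uniform(0.0, math.pi / 2)      # angle is continuous here
        half_span = math.sin(angle) * 0.5          # vertical half-extent of the needle
        if abs(drop_point + half_span) >= 1 or abs(drop_point - half_span) >= 1:
            crossed += 1
    return crossed / trials

print(crossing_fraction())  # close to 2/3, nowhere near 1/pi
```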
To get a decent approximation of $\pi$, you need a finer distribution: try selecting drop_point with `random.randint(-1000, 1000) / 1000`, or an even larger denominator. Better still, Python can give you a uniform random float directly via `random.uniform`.
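For concreteness, here is a sketch of the corrected simulation (the `estimate_crossing_probability` name and the trial count are mine) that replaces both coarse `randint` buckets with continuous `random.uniform` draws, keeping your geometry otherwise unchanged:

```python
import math
import random

def estimate_crossing_probability(trials=500_000, seed=42):
    """Buffon's needle with needle_length = 1 and gap_length = 2."""
    rng = random.Random(seed)
    needle_length = 1.0
    crossed = 0
    for _ in range(trials):
        drop_point = rng.uniform(-1.0, 1.0)        # center: continuous, not a 0.1 grid
        angle = rng.uniform(0.0, math.pi / 2)      # angle: continuous
        half_span = math.sin(angle) * needle_length / 2
        if abs(drop_point + half_span) >= 1 or abs(drop_point - half_span) >= 1:
            crossed += 1
    return crossed / trials

p = estimate_crossing_probability()
print(p)      # close to 1/pi = 0.3183...
print(1 / p)  # the derived estimate of pi
```

With half a million trials the estimate should land within a few thousandths of 1/pi, so the derived value of pi comes out near 3.14 instead of 2.91.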