Standard deviation shouldn't reduce?


I need to write an algorithm that computes an integral via the Monte Carlo method, and as part of the simulation I need to calculate the standard deviation of a sample generated in my program. My problem is that when I increase the number of elements in my sample, the standard deviation does not decay as I expected. At first I thought my function was wrong, but after comparing with NumPy's built-in standard-deviation function I saw the values were the same, and still not decreasing. So I suspected the problem was my sample, and I ran the following test to check whether the standard deviation decreases as it should:

import random
import numpy as np

values = [random.uniform(0, 1) for i in range(100)]

print(np.std(values))

the standard deviation obtained: 0.289

values = [random.uniform(0, 1) for i in range(1000)]

print(np.std(values))

the standard deviation obtained: 0.287

Shouldn't this decrease further as my n increases? I need this as a stopping criterion in my simulation, and I was expecting it to decrease with a bigger sample. What is wrong with my mathematical reasoning?

Thanks in advance!

1 Answer

I think you are confusing the standard deviation of the samples ($\sigma=1/\sqrt{12}=0.288675...$ for a uniform distribution) with the standard deviation of averages of $N$ independent samples ($\sigma/\sqrt N$).

What you get is an estimate of $\sigma$, with more confidence in the second case (as the variance of the estimator is smaller).
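A minimal sketch of this distinction (the variable names and sample sizes are illustrative, not from the original post): the sample standard deviation estimates $\sigma = 1/\sqrt{12} \approx 0.2887$ no matter how large the sample is, while the standard error of the mean, $\sigma/\sqrt{N}$, is the quantity that actually shrinks as $N$ grows, and is what a Monte Carlo stopping criterion would typically monitor.

```python
import math
import random

import numpy as np

random.seed(0)  # fixed seed so the run is reproducible

sigma = 1 / math.sqrt(12)  # exact std of Uniform(0, 1), ~0.2887

for n in (100, 1000, 10000):
    sample = [random.uniform(0, 1) for _ in range(n)]
    sample_std = np.std(sample)              # hovers near 0.2887 for every n
    std_error = sample_std / math.sqrt(n)    # decreases like 1/sqrt(n)
    print(n, round(float(sample_std), 4), round(float(std_error), 4))
```

With larger n the first printed column stays near 0.2887 while the second keeps shrinking, which is the behavior the question was expecting from np.std itself.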