This is probably really simple to anyone who's actually taken stats. I want to generate a number of sets of random numbers from 0 to 1, then find the standard deviation of the set means.
I'm trying to find out whether it's worth computing the mean of each set of randoms, or whether I can just assume a constant value because the SD of the means would be insignificant.
edit: I know I can simulate it, but I'd really rather see the math behind it.
You can compute the variance of a single draw from a uniform distribution on $(0,1)$: in general $\operatorname{Var}(U) = (b-a)^2/12$ for a uniform on $(a,b)$, so here $\operatorname{Var}(U) = 1/12$. For independent draws, the variance of the sum is the sum of the variances, i.e. $n$ times the single-draw variance. Dividing the sum by $n$ to get the mean divides the variance by $n^2$, so the variance of the mean is $\operatorname{Var}(\bar U) = \frac{1}{12n}$, and the standard deviation of the mean is $\frac{1}{\sqrt{12n}}$. In other words, the SD of the mean decreases as $1/\sqrt{n}$: with $n = 100$ samples per set it is already about $0.029$.
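I know you asked for the math rather than a simulation, but for what it's worth, here is a minimal NumPy check (sample sizes and seed are arbitrary choices) confirming that the empirical SD of the set means matches $1/\sqrt{12n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100         # random numbers per set
trials = 20000  # number of sets

# Each row is one set of n uniform(0,1) draws; take the mean of each set.
means = rng.random((trials, n)).mean(axis=1)

empirical_sd = means.std()
predicted_sd = 1 / np.sqrt(12 * n)  # sqrt(Var(U)/n), with Var(U) = 1/12

print(f"empirical SD of means:  {empirical_sd:.5f}")
print(f"predicted 1/sqrt(12n):  {predicted_sd:.5f}")
```

The two numbers agree to about three decimal places, which is as close as you'd expect given the finite number of trials.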