For a task, I need to find the variance / standard deviation of a set of numbers that are all between 0 and 1.
When I square the difference from the mean, squaring makes each difference smaller (i.e. closer to 0), because the difference is less than 1. Taking the square root of the sum makes the number larger again, but I'm worried that it will be too late: the squaring will already have made the differences look smaller instead of larger.
Is that a problem? Is there a better way to measure the variability from the mean for these sets?
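For concreteness, here is a minimal sketch of the calculation I mean (plain Python, with made-up numbers):

```python
import math

# Hypothetical sample of values between 0 and 1.
xs = [0.1, 0.4, 0.5, 0.9]

mean = sum(xs) / len(xs)                 # 0.475
sq_devs = [(x - mean) ** 2 for x in xs]  # each much smaller than |x - mean|
variance = sum(sq_devs) / len(xs)        # 0.081875 -- looks "too small"
sd = math.sqrt(variance)                 # 0.286... -- back on the scale of the deviations

print(mean, variance, sd)
```

The intermediate squared deviations are indeed tiny, but the final square root lands the result back on the same scale as the raw deviations.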
(FYI, the problem I'm trying to solve is to find how "interesting" a sampled histogram is by measuring its variability. Flat = low variability = boring.)
(Shortly I'll also have to judge how well two curves match, by using the same calculation between the two curves instead of against the mean. Low variability between two curves indicates a good "quality" of match.)
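A rough sketch of both use cases, assuming a single root-mean-square-style helper (the function name and the data are made up):

```python
import math

def rms_deviation(values, reference):
    """Root-mean-square deviation of values from reference.
    reference may be a single number (the mean, for histogram
    variability) or a sequence of equal length (a second curve)."""
    if isinstance(reference, (int, float)):
        reference = [reference] * len(values)
    return math.sqrt(sum((v - r) ** 2 for v, r in zip(values, reference)) / len(values))

# "Interestingness" of a histogram: deviation of its bins from their mean.
hist = [0.2, 0.3, 0.1, 0.4]
print(rms_deviation(hist, sum(hist) / len(hist)))  # flat histogram -> near 0

# Match quality between two curves: low deviation = good match.
curve_a = [0.1, 0.5, 0.9, 0.5]
curve_b = [0.2, 0.4, 0.8, 0.6]
print(rms_deviation(curve_a, curve_b))
```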
Any ideas appreciated!
No, it is not a problem.
Indeed, one could try the following procedure to remedy the "problem" that the square makes differences smaller:

1. Multiply all the numbers by a constant c, chosen large enough that every difference from the mean exceeds 1.
2. Compute the standard deviation of the scaled numbers.
3. Divide the result by c.
However, this gives the exact same result as just calculating the standard deviation of the original data, because the standard deviation scales linearly with the data: sd(c·x) = c·sd(x). In my eyes, this shows that there is no real problem.
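A quick numeric check (a Python sketch with made-up data) confirms that the detour through scaling changes nothing:

```python
import statistics

xs = [0.1, 0.4, 0.5, 0.9]  # made-up data in [0, 1]
c = 100                    # scale factor: all differences from the mean now exceed 1

direct = statistics.pstdev(xs)
rescaled = statistics.pstdev([c * x for x in xs]) / c

print(direct, rescaled)    # identical up to floating-point rounding
```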