I'm comparing several measurements against a standard sample, and I would like to calculate the Z-score to quantify how severe the deviating measurements are. For example, I have this data:
a/b
2.20
2.20
2.20
2.21
2.21
2.20
2.20
2.20
2.20
2.20
2.20
2.20
2.20
2.21
2.20
2.20
2.20
2.20
2.20
2.20
2.19
2.21
2.21
2.21
I've calculated the mean (2.20) and the standard deviation (0.005), and the standard sample is equal to 2.17. So the Z-score for a single measurement of 2.19 would be (2.19 - 2.17)/0.005 = 4, and (2.20 - 2.17)/0.005 = 6 for the mean of the measured sample.
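To make the calculation concrete, here is a short Python sketch that reproduces these numbers from the data above (using the population standard deviation; the variable names are my own):

```python
import statistics

# Measurements a/b from the table above
data = [2.20, 2.20, 2.20, 2.21, 2.21, 2.20, 2.20, 2.20,
        2.20, 2.20, 2.20, 2.20, 2.20, 2.21, 2.20, 2.20,
        2.20, 2.20, 2.20, 2.20, 2.19, 2.21, 2.21, 2.21]

standard = 2.17  # the standard sample

mean = statistics.mean(data)
sd = statistics.pstdev(data)  # population standard deviation

print(f"mean = {mean:.4f}, sd = {sd:.4f}")
print(f"z for a measurement of 2.19: {(2.19 - standard) / sd:.1f}")
print(f"z for the sample mean:       {(mean - standard) / sd:.1f}")
```

Rounded to the precision used in the question, this gives mean 2.20, standard deviation 0.005, and z-scores of about 4 and 6, matching the hand calculation.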
Why do I get such a high Z-score when the values are not that different?
For example, I've found this on the internet, where the Z-score is equal to 0.4 for a value +0.1 compared to the standard sample.
If you plot your data and look closely, you will understand what's going on. I have plotted the mean as a red vertical line and z = 1 on both sides as green vertical lines.
Since your data has very low variance, even a small deviation from the mean results in a very high z-score.
You can also see this from the formula itself: the numerator is the distance from the mean, and the denominator is a scaling factor. With a small variance in the underlying data, any new value even slightly away from the mean is treated as highly surprising (i.e., it gets a high z-score).