I have a series of physical measurements that is normally distributed. What does this imply about the trustworthiness of the person who took them? Does it imply that the measurements were honestly taken and that I don't need to repeat them?
What has been measured is the speed of ultrasonic waves in concrete. I have 80 values.
Not all physical measures are normally distributed.
If you think the measure should be normally distributed and the data pass a normality test (such as Shapiro-Wilk or Anderson-Darling), then that might indicate an honest selection and measurement procedure.
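As a sketch of such a check (in Python rather than R, with `scipy`; the array `speeds` is a hypothetical stand-in for your 80 velocity readings, not real data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical stand-in for 80 ultrasonic pulse-velocity readings (m/s)
speeds = rng.normal(loc=4200, scale=150, size=80)

# Shapiro-Wilk: null hypothesis is that the sample came from a normal distribution
w_stat, p_value = stats.shapiro(speeds)
print(f"W = {w_stat:.4f}, p = {p_value:.4f}")

# A large p-value merely fails to reject normality -- it does not prove honesty
if p_value > 0.05:
    print("No evidence against normality at the 5% level")
```

Note that a non-significant result is weak evidence at best, for exactly the reason given next.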
Or, by contrast, it might only indicate that the person who 'generated' the data knows how to use pseudo-random numbers to generate normal variates. For example, in R statistical software, `round(rnorm(80, 100, 15))` will get you 80 pretty convincing IQ scores for an educational psychology 'experiment' without the bother of dealing with any subjects. (Old consulting statisticians' saying: "67.38% of all statistics are made up on the spot.") If in doubt, I would never take approximate normality alone as an indication of honest data collection.
More to the point, if you seriously question the validity of the data collection process, you might do some tests of randomness of the data in the order presented. For example, test the number of 'runs' above and below the mean, look at a 'control chart' of the data, or check several lags of autocorrelation for values significantly different from 0.
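Two of those checks can be sketched as follows (Python with numpy; the runs-test normal approximation and lag-1 autocorrelation are standard formulas, and `speeds` is again a hypothetical stand-in for the readings in the order they were taken):

```python
import numpy as np

rng = np.random.default_rng(2)
speeds = rng.normal(4200, 150, size=80)  # stand-in for readings in collection order

# Wald-Wolfowitz runs test above/below the mean
above = speeds > speeds.mean()
runs = 1 + int(np.sum(above[1:] != above[:-1]))  # a new run starts at each sign change
n1, n2 = int(above.sum()), int((~above).sum())
n = n1 + n2
mu = 2 * n1 * n2 / n + 1                          # expected number of runs if random
var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
z = (runs - mu) / np.sqrt(var)                    # |z| >> 2 suggests a non-random order
print(f"runs = {runs}, expected = {mu:.1f}, z = {z:.2f}")

# Lag-1 autocorrelation; values far from 0 also suggest the order is not random
x = speeds - speeds.mean()
r1 = np.sum(x[1:] * x[:-1]) / np.sum(x**2)
print(f"lag-1 autocorrelation = {r1:.3f}")
```

Too few runs (or strong positive autocorrelation) is what hand-fabricated or copied-and-tweaked sequences often show.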
You might also compare the distribution, mean, and SD against values from a trusted experiment measuring the same physical properties. And you might ask to see the original lab notes and spreadsheets.
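For the comparison against a trusted experiment, one sketch (Python with `scipy`; both arrays here are hypothetical) is a Welch t-test on the means plus a Kolmogorov-Smirnov test on the full distributions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
suspect = rng.normal(4200, 150, size=80)    # the questioned measurements
reference = rng.normal(4180, 160, size=60)  # trusted measurements of the same material

# Welch two-sample t-test for a difference in means (no equal-variance assumption)
t_stat, p_mean = stats.ttest_ind(suspect, reference, equal_var=False)

# Kolmogorov-Smirnov test compares the entire distributions, not just the means
ks_stat, p_dist = stats.ks_2samp(suspect, reference)
print(f"mean comparison p = {p_mean:.3f}, distribution comparison p = {p_dist:.3f}")
```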
Finally, I view this as a serious question about the authenticity of data. And you are correct not to see any connection between this question and the CLT.