(From Jwan622's post here.)
The following image was shown, where we have:
- 19 samples (n = 19)
- Sample mean of 4.4 (x̄ = 4.4)
- Sample standard deviation of 2.3 (s = 2.3)
The standard error of the mean from this sample is SE = s / sqrt(n) = 2.3 / sqrt(19). If I made one more measurement and it came out to 2, what would be the estimate of the random error in that single measurement, with 95% confidence?
Would that estimate be:
2 +/- (1.96 * [s + SE])?
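To make the question concrete, here is a minimal Python sketch of how I read that formula. The variable names are mine, and the last line is the book's claimed interval as I understand it, not something I'm asserting is correct:

```python
import math

# Sample statistics from the image
n = 19        # number of samples
s = 2.3       # sample standard deviation
x_new = 2.0   # the additional single measurement

# Standard error of the mean
se = s / math.sqrt(n)          # 2.3 / sqrt(19) ≈ 0.528

# The claimed 95% bound on the random error of a single new
# measurement (as I read the book): 1.96 * (s + SE)
half_width = 1.96 * (s + se)   # ≈ 1.96 * 2.83 ≈ 5.54

print(f"SE = {se:.3f}")
print(f"interval: {x_new} +/- {half_width:.2f}")
```

So, on these numbers, the claimed interval would be roughly 2 +/- 5.54.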
The reason I ask is that a book here (Morris, Measurement and Instrumentation, 2nd edition, 2016) claims this is the case, but I have been unable to independently verify it anywhere else.