significant figures in averaging samples


I can't seem to find anything about this, but I thought that for every 10 samples (of the same thing) you averaged together, you gained 1 significant figure. Maybe you'd need 100 samples to gain 2 figures.

I'm talking about the context where maybe a computer or pulse generator is firing a pulse laser in a loop and you're looking at a change in absorbance or fluorescence with a photodiode or photomultiplier tube. You see a trace on a digital storage scope and you average hundreds or thousands of them for improved accuracy.

I used to work for chemists, but my job was building electronics to their specifications. I built the amplifier for the photodiode and interfaced the storage scope to computers. I know improving accuracy was the purpose of the averaging, but I don't remember how the significant figures worked out. It was also 30+ years ago.

1 Answer
I think the statistical fact you need to know in this case is that if $X_1, X_2, X_3, \dots , X_n$ are independent, identically distributed random variables, each with standard deviation $\sigma$, then the standard deviation of their mean, $\overline{X}$, is $\sigma / \sqrt{n}$. So if you have some estimate of the error in each of your $n$ readings, just divide that error estimate by $\sqrt{n}$ to find the error in the average.
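As a quick numerical sanity check (a sketch I'm adding, not part of the original answer), you can simulate repeated averaged measurements of Gaussian readings and watch the empirical standard deviation of the mean track $\sigma/\sqrt{n}$:

```python
import random
import statistics

random.seed(42)
sigma = 1.0  # assumed standard deviation of a single reading

def mean_of_n(n):
    """One averaged measurement: the mean of n independent noisy readings."""
    return statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))

# Repeat the averaged measurement many times and measure its spread.
for n in (1, 4, 16, 64):
    means = [mean_of_n(n) for _ in range(20_000)]
    print(f"n = {n:2d}: empirical sd of mean = {statistics.stdev(means):.4f}, "
          f"sigma/sqrt(n) = {sigma / n**0.5:.4f}")
```

The two printed columns agree to within sampling noise, confirming the $\sigma/\sqrt{n}$ scaling: quadrupling $n$ halves the error of the average.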

In terms of significant digits, this means that in order to get one more significant digit in your results, you need 100 times as many samples as in your original data. If you want two more significant digits, you need to increase your number of samples by a factor of 10,000.
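To make the digit-counting concrete (a sketch with an assumed per-reading error of 0.5 in whatever units you measure), multiplying the sample count by 100 divides the error of the mean by exactly 10, which is one more significant digit:

```python
import math

sigma = 0.5  # assumed error in a single reading (hypothetical value)

# Each factor of 100 in n buys a factor of 10 in precision:
# one additional significant digit per 100x samples.
for n in (1, 100, 10_000):
    print(f"n = {n:>6}: error of mean = {sigma / math.sqrt(n):g}")
# n =      1: error of mean = 0.5
# n =    100: error of mean = 0.05
# n =  10000: error of mean = 0.005
```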

All this assumes that your readings are in fact independent and unbiased. If that assumption is false, all bets are off.

Reference: John R. Taylor, *An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements*, 2nd ed.