How are errors computed?

Is there a standard method for computing errors? For example, ignoring distributions for the moment: if I have one value that is $10\pm1$ and another that is $20\pm1$, then their product is $200^{+31}_{-29}$; if instead the second value is $30\pm1$, the product is $300^{+41}_{-39}$. Multiplying full error distributions beyond this would obviously get complicated, but if I just want to know the min/max bounds of the resulting error, is there a method for that?
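
The corner arithmetic behind these bounds can be made explicit: the product of two intervals attains its extremes at the endpoints, so checking the four corner products is enough. A minimal Python sketch of that computation (the helper name `product_bounds` is just illustrative):

```python
from itertools import product

def product_bounds(a, da, b, db):
    """Exact min/max of x*y for x in [a-da, a+da] and y in [b-db, b+db].

    The product of two intervals attains its extremes at the endpoints,
    so the four corner products are sufficient.
    """
    corners = [x * y for x, y in product((a - da, a + da), (b - db, b + db))]
    return min(corners), max(corners)

print(product_bounds(10, 1, 20, 1))  # (171, 231) -> 200 - 29 / 200 + 31
print(product_bounds(10, 1, 30, 1))  # (261, 341) -> 300 - 39 / 300 + 41
```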

This is propagation of uncertainties.

In practice, it is quite simple. Suppose $$C = A \times B \implies \log(C)=\log(A)+\log(B).$$ Differentiating gives $$\frac{dC}{C}=\frac{dA}{A}+\frac{dB}{B}\implies\frac{\Delta C}{C}=\frac{\Delta A}{A}+\frac{\Delta B}{B}\implies \Delta C=B\,\Delta A+A\,\Delta B,$$ where the $\Delta$'s are the (positive) error magnitudes and they add because the worst case is both errors pushing in the same direction. Notice that this is just the product rule, $d(AB)=B\,dA+A\,dB$.

So, for your case $A=10\pm 1$ and $B=20 \pm 1$: $$\Delta(AB)=20\times 1+10 \times 1=30,$$ i.e. $AB=200\pm 30$. This first-order estimate drops the cross term $\Delta A\,\Delta B=1$, which is exactly why your exact bounds were $^{+31}_{-29}$ rather than $\pm 30$.
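
As a sanity check, here is a minimal Python sketch of this linear rule (the helper name `propagate_product` is just illustrative); it reproduces $\Delta(AB)=30$ and, for $B=30\pm1$, gives $40$ against the exact $^{+41}_{-39}$:

```python
def propagate_product(a, da, b, db):
    """First-order error for C = A*B: dC = B*dA + A*dB (drops dA*dB)."""
    return b * da + a * db

print(propagate_product(10, 1, 20, 1))  # 30 -> 200 +/- 30 (exact: +31/-29)
print(propagate_product(10, 1, 30, 1))  # 40 -> 300 +/- 40 (exact: +41/-39)
```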