Compounding errors to get a well-defined "reasonable" range for the end result


I'm doing some data migration for a company, and there are some materials that are essentially "some things in aqueous solution." In a specification for one such material, this is defined as "Solids content: 36-40%; Active content: 30-32%." The active content is a subset of the solids content.

Now, I need to enter this into a system hierarchically, i.e. the material has x% solids content, and the solids themselves have y% active content.

If I were to take the "widest possible case" scenario (i.e. the lowest active content might still coincide with the highest solids content, and vice versa), you would have 75%-88.89% active content within the solids. If I took the narrowest case (i.e. the active content is only ever maximised when the solids content is at its highest), you would have 80%-83.33%.
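For reference, the two ranges above come from pairing the interval endpoints in the two extreme ways. A minimal sketch of that arithmetic (the function name and structure are mine, not from any spec):

```python
# Bounds on the active fraction of the solids, given the spec intervals:
# solids 36-40%, active 30-32% (active is a subset of solids).
def ratio_ranges(active_lo, active_hi, solids_lo, solids_hi):
    # Widest case: pair each active extreme with the opposite solids extreme.
    widest = (active_lo / solids_hi, active_hi / solids_lo)
    # Narrowest case: assume active and solids always move together.
    narrowest = (active_hi / solids_hi, active_lo / solids_lo)
    return widest, narrowest

widest, narrowest = ratio_ranges(30, 32, 36, 40)
print(widest)     # (0.75, 0.888...)  -> 75% - 88.89%
print(narrowest)  # (0.8, 0.833...)   -> 80% - 83.33%
```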

I believe that both of these approaches are too extreme. My non-scientific approach would be to eyeball it and say that, realistically, you would put something like 78%-86%, but I suspect there is a more rigorous process that will allow me to produce a "reasonable" end range. I believe the correct approach would be to propagate the errors the way it is usually done in experimental science, but it's been a long time since I did physics lab, and I'm unsure how I would apply this (and whether it would be valid here).



Standard error propagation in science assumes that the errors to be combined are uncorrelated, so that the deviation of one does not depend on the deviation of the others. In the case of uncorrelated errors, where you are trying to find the sum of two quantities with errors, $Z=X+Y$, the errors add as squares, i.e., the variances add: $$\sigma^2_Z=\sigma^2_X+\sigma^2_Y$$ If there is a correlation, this can be incorporated as well: $$\sigma^2_Z=\sigma^2_X+\sigma^2_Y+2\,\mathrm{Cov}(X,Y)$$ and if there is perfect correlation, the errors do indeed add linearly.
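Since the quantity asked about is a ratio rather than a sum, the analogous uncorrelated-error rule is that *relative* variances add: for $Z=X/Y$, $(\sigma_Z/Z)^2=(\sigma_X/X)^2+(\sigma_Y/Y)^2$. A sketch applying that to the numbers in the question, under two assumptions of mine: each spec half-width is treated as one standard deviation, and the correlation between active and solids content (which the question says exists) is ignored:

```python
import math

# Propagate spec tolerances to the ratio active/solids, treating each
# spec half-width as one standard deviation (an assumption, not from the
# spec) and the two errors as uncorrelated (also an assumption; active
# is a subset of solids, so they are positively correlated in reality).
def ratio_sigma(x, sx, y, sy):
    # For Z = X / Y with uncorrelated errors, relative variances add:
    #   (sigma_Z / Z)^2 = (sigma_X / X)^2 + (sigma_Y / Y)^2
    z = x / y
    rel = math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
    return z, z * rel

active, s_active = 31.0, 1.0   # 30-32%: midpoint 31, half-width 1
solids, s_solids = 38.0, 2.0   # 36-40%: midpoint 38, half-width 2
z, sz = ratio_sigma(active, s_active, solids, s_solids)
print(f"{z:.4f} +/- {sz:.4f}")   # roughly 0.816 +/- 0.050
```

Under these assumptions the one-sigma band is roughly 76.5%-86.6%, which lands close to the eyeballed 78%-86% in the question; including the positive covariance term would narrow it further.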