Comparison of two error distributions to determine "goodness of fit"


I am a physicist a few years removed from my last statistics course, so I am hoping to get some advice on comparing some data I recently generated.

The context is as follows. I have two slightly different theoretical models, which I used to generate two sets of data (the numbers are radiative strengths in multi-electron atoms for various transitions, though that is not essential to the mathematics). I also have a corresponding set of "accepted" or "literature" values for these transitions, and I would like to determine which of my two models produces results that are closer overall/on average to the "accepted" values.

My current metric of comparison is the relative error for each transition, i.e. $ \delta_{\mathrm{rel}} = \frac{|T_{i,j}-A_j|}{A_j} $, where $T_{i,j}$ is the result of theoretical model $i$ for transition $j$ and $A_j$ is the accepted result for transition $j$. This seemed like the most obvious place to start. I am now stuck on how to extract some sort of meaning from these relative errors. Currently, I have just taken the arithmetic mean of the relative errors for each model, but my intuition tells me this isn't a rigorous or correct way to go about it.
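To make the setup concrete, here is a minimal NumPy sketch of the comparison described above. The arrays `A` and `T` hold made-up placeholder numbers standing in for the accepted values $A_j$ and the two models' predictions $T_{i,j}$; they are not real transition strengths.

```python
import numpy as np

# Hypothetical data: accepted literature values A_j for five transitions,
# and the predictions T_{i,j} of the two theoretical models (made-up numbers).
A = np.array([0.82, 1.54, 0.31, 2.10, 0.95])
T = np.array([
    [0.80, 1.60, 0.30, 2.00, 1.00],  # model i = 1
    [0.85, 1.50, 0.33, 2.30, 0.90],  # model i = 2
])

# Relative error per transition: |T_ij - A_j| / A_j (A broadcasts over rows).
rel_err = np.abs(T - A) / A

# Arithmetic mean of the relative errors: one summary number per model.
mean_rel_err = rel_err.mean(axis=1)
print(mean_rel_err)
```

This reproduces the "arithmetic mean of relative errors" summary in one line per model; the question is whether that mean is the right statistic, not how to compute it.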

From what I can remember, each relative error can be treated as an independent random variable. If I then square the difference in the numerator of my relative error, I believe the resulting sum should follow a $\chi^2$ distribution. Thus, I would have two samples from two theoretical $\chi^2$ distributions, one for each of my models. How should I go about comparing these two distributions? My hypothesis-testing knowledge is a bit rusty, and I can't remember ever comparing two $\chi^2$ distributions.
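For reference, the squared-relative-error statistic I have in mind is computed like this (same hypothetical `A` and `T` arrays as above; whether this sum actually follows a $\chi^2$ distribution presumably depends on the error model, which is part of what I'm asking):

```python
import numpy as np

# Same hypothetical data: accepted values and the two models' predictions.
A = np.array([0.82, 1.54, 0.31, 2.10, 0.95])
T = np.array([
    [0.80, 1.60, 0.30, 2.00, 1.00],  # model i = 1
    [0.85, 1.50, 0.33, 2.30, 0.90],  # model i = 2
])

# Chi-square-like statistic per model: sum over transitions j of
# ((T_ij - A_j) / A_j)^2, i.e. the sum of squared relative differences.
chi2_stat = np.sum(((T - A) / A) ** 2, axis=1)
print(chi2_stat)
```

This gives one number per model; my question is how to compare these two numbers (or their underlying distributions) in a statistically meaningful way.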

Any advice or indication of whether I am barking up the wrong tree would be appreciated.