(mis)Understanding the reduced $\chi^2$ statistic


In chapter 8 of the book "Measurements and their Uncertainties" by Hughes and Hase, they discuss the reduced chi-squared statistic.

They define it as

$$ \chi^2_v = \frac{\chi_{\text{min}}^2}{v} $$

where $v$ is the number of degrees of freedom, and $\chi_{\text{min}}^2$ is the minimum value of the $\chi^2$ goodness-of-fit statistic obtained when fitting some data to some model.
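To make sure I understand the definition, here is a minimal sketch of how I would compute $\chi^2_v$ for a straight-line fit (the data and the uncertainty $\alpha = 0.5$ are made up for illustration):

```python
import numpy as np

# hypothetical data, roughly y = 2x + 1, with common uncertainty alpha = 0.5
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
alpha = 0.5

# least-squares straight-line fit (2 fitted parameters)
coeffs = np.polyfit(x, y, 1)
residuals = y - np.polyval(coeffs, x)

chi2_min = np.sum((residuals / alpha) ** 2)  # minimised chi-squared
v = len(x) - 2                               # N data points minus 2 parameters
chi2_v = chi2_min / v                        # reduced chi-squared
print(chi2_v)                                # should be of order 1 for a good fit
```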

They write

In Section 8.4 we discussed how one would not be surprised if the observed value of $\chi^2_{\text{min}}$ was within $2\sigma$ of the mean for a good fit, and that the null hypothesis should only be questioned if the value of $\chi^2_{\text{min}}$ was larger than, say, $3\sigma$ from the mean. As the standard deviation of the $\chi^2_{\text{min}}$ distribution depends on $\nu$, the confidence limits for $\chi^2_\nu$ also depend on $\nu$. In Fig. 8.5 the values of $\chi^2_\nu$ calculated at $\nu + \sigma$, $\nu + 2\sigma$, and $\nu + 3\sigma$ are plotted as a function of the number of degrees of freedom. Recall that for a Gaussian distribution these intervals correspond to the 68%, 95% and 99.7% confidence limits. The $3\sigma$ confidence limit of $\chi^2_\nu$ is tabulated for several values of $\nu$ in Table 8.1.

with the following being a capture of Fig. 8.5 and Table 8.1.

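For reference, the quoted passage relies on the fact that the $\chi^2$ distribution with $v$ degrees of freedom has mean $v$ and standard deviation $\sqrt{2v}$, which is easy to check numerically:

```python
import scipy.stats as st

v = 100
mean = st.chi2.mean(v)  # mean of the chi-squared distribution is v
std = st.chi2.std(v)    # standard deviation is sqrt(2*v)
print(mean, std)        # 100.0 and about 14.14
```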

However, I don't understand how the graph and the table are produced.

Naively, looking at the label of the graph, one would think that

$$ \chi^2_v = 100+3\cdot 0.997 \neq 1.4 $$

for $v=100$.

Moreover, I don't see how you can compute a meaningful value for $\chi^2_v$ without knowing $\chi_{\text{min}}^2$.

I've tried, unsuccessfully, to reproduce Table 8.1, but I don't understand how it's computed:

```python
import scipy.stats as st

v = 100

# mean of chi2_v (i.e. 1) plus the chi-squared value at 3 sigma (99.7%)
1 + st.chi2.ppf(0.997, v)
# about 144 ???

# maybe we should divide by v?
(1 + st.chi2.ppf(0.997, v)) / v
# about 1.44 -- seems right

# but that doesn't work for v = 5: it gives 3.79, not the tabulated 2.9
(1 + st.chi2.ppf(0.997, 5)) / 5
```
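For comparison, here is my own guess (not something stated in the excerpt): if the book uses the Gaussian approximation, taking the mean of $\chi^2_{\text{min}}$ as $v$ and its standard deviation as $\sqrt{2v}$, then evaluating $\chi^2_v$ at $v + 3\sqrt{2v}$ gives $1 + 3\sqrt{2/v}$:

```python
import math

def chi2v_limit(v, n_sigma=3):
    # assumption (mine): the book evaluates chi2_v at v + n_sigma * sqrt(2*v),
    # using the Gaussian approximation mean = v, std = sqrt(2*v)
    return 1 + n_sigma * math.sqrt(2 / v)

print(chi2v_limit(100))  # about 1.42, close to the tabulated 1.4
print(chi2v_limit(5))    # about 2.90, matching the tabulated 2.9
```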