I asked this question previously in another group but did not get any response; can anyone here help me solve it?
I have observed data and predicted data from several models. I computed the reduced chi-square between the observed and model data using the equation below: $$ \chi_\nu^2 = \frac{1}{\nu} \sum\frac{(\text{obs}-\text{model})^2}{\sigma^2}, $$ where $\nu$ is the number of degrees of freedom and $\sigma^2$ is the variance of the observed data.
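For concreteness, here is a minimal sketch of this computation (the array names `obs`, `model`, `sigma` and the toy numbers are made up for illustration; the number of fitted parameters depends on your model):

```python
import numpy as np

def reduced_chi_square(obs, model, sigma, n_params):
    """Reduced chi-square: sum of squared, sigma-scaled residuals,
    divided by the degrees of freedom nu = N - n_params."""
    obs, model, sigma = map(np.asarray, (obs, model, sigma))
    nu = obs.size - n_params           # degrees of freedom
    resid = (obs - model) / sigma      # sigma is the standard deviation here
    return np.sum(resid**2) / nu

# Toy example with made-up numbers:
obs   = np.array([1.0, 2.1, 2.9, 4.2])
model = np.array([1.0, 2.0, 3.0, 4.0])
sigma = np.array([0.1, 0.1, 0.1, 0.1])
print(reduced_chi_square(obs, model, sigma, n_params=1))  # close to 2.0
```

Note that $\sigma$ enters as the standard deviation of each measurement, so $\sigma^2$ in the denominator is the variance.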
I got reduced chi-square values ranging from $0$ to $500$. As we know, a reduced chi-square value no greater than $1$ indicates that the discrepancy between observations and estimates is consistent with the error variance, so we selected the models whose reduced chi-square value lies between $0$ and $1$.
But I want to define a confidence interval for the reduced chi-square: I want to find the models that lie within the $95\%$ confidence interval. Could anyone tell me how to define such a confidence interval for the reduced chi-square? Also, what limit on the reduced chi-square value identifies the best model? Please also share any literature related to the reduced chi-square test.
Thanks in advance.
First of all, I am not a statistician, and I may be wrong about notation and definitions. I am just sharing some thoughts, because I also came across this problem at some point.
In some scenarios we calculate the reduced chi-square as a "goodness of fit" test, rather than "minimizing chi-square for best-fit parameters", and we want to exclude those fits with reduced chi-square greater than some "chi_cut" value. We would like a relation between this "chi_cut" value and the probability that the reduced chi-square is below "chi_cut". To find such a relation, we need to go back to the mathematical basis of this "goodness of fit" test.
There is an assumption (usually not a strong one) behind this reduced chi-square test: the measurement errors $\sigma$ are Gaussian. Then we can write the likelihood of the fit as
$P = a\cdot \prod_i \exp \left(- \frac{(\text{obs}_i-\text{model}_i)^2}{2\sigma_i^2}\right) = a\cdot \exp\left(-\frac{1}{2}\sum_i \frac{(\text{obs}_i-\text{model}_i)^2}{\sigma_i^2}\right) = a\cdot \exp\left(-\frac{\nu}{2}\chi_\nu^2\right)$, ... ... (1)

where $a$ is a normalizing constant. When optimizing parameters, we want $P$ to be maximized and thus $\sum_i \frac{(\text{obs}_i-\text{model}_i)^2}{\sigma_i^2}$ minimized. To find the relation between the "chi_cut" value and the probability, we can simply use Equation (1): assuming the $\sigma_i$ are reasonably measured, then roughly $\chi_\nu^2 = 1$ and $P_0 = a\cdot\exp(-\nu/2)$. If we instead find $\chi_\nu^2 = 4$ in a fit, then $P_1 = a\cdot \exp(-2\nu)$. It seems reasonable to me to ignore the factor $\nu$ here, and to interpret $P_0$ as the case where the model is good enough given the measurement noise $\sigma_i$, and $P_1$ as the case where the observations deviate by $2\sigma$ from the model. Similarly, $\chi_\nu^2 = 9$ means the observations deviate by $3\sigma$ from the model, and so on. So I'd prefer to use this relation between the "chi_cut" value and "$n$ sigma", $\chi_\nu^2 = n^2$, instead of calculating the probability $P$, in which $\nu$ is involved. Personally, I would reject those fits with $\chi_\nu^2 > 4$ or $\chi_\nu^2 > 9$.