The pages on the chi-squared test, Pearson's chi-squared test, and Neyman's restricted chi-squared test all use the expected value as the denominator of the test statistic:
$$\sum_i\frac{(y_i-\hat y_i)^2}{\hat y_i}$$
However, on another page, about the reduced chi-squared test, the denominator of the chi-squared statistic is instead defined by the variance:
$$\sum_i\frac{(y_i-\hat y_i)^2}{\sigma_i^2 }$$
In class, I learned that the statistic is called "Pearson" if it uses the model's variance and "Neyman" if it uses the data's variance.
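To make the distinction concrete, here is a small sketch (with made-up counts) of how the two denominators differ. For Poisson-distributed counts the variance equals the mean, so dividing by the model's expected value $\hat y_i$ is the same as dividing by the model's variance (Pearson), while dividing by the observed count $y_i$ uses the data's variance estimate (Neyman):

```python
import numpy as np

# Hypothetical data: observed counts and model-expected counts \hat y_i
observed = np.array([18, 25, 12, 45], dtype=float)
expected = np.array([20.0, 22.0, 15.0, 43.0])

# Pearson's chi-squared: denominator is the model's expected value,
# which is also the model's variance under a Poisson assumption (Var = mean).
pearson = np.sum((observed - expected) ** 2 / expected)

# Neyman's (modified) chi-squared: denominator is the observed count,
# i.e. the data-based variance estimate under the same Poisson assumption.
neyman = np.sum((observed - expected) ** 2 / observed)

print(pearson, neyman)
```

The two statistics agree asymptotically when the model fits, but differ in finite samples, as the two printed values show.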
But on yet another page, about the Chi-Square Test of Independence, the test statistic again uses the expected value as the denominator.
What is the denominator of the chi-squared test for goodness of fit? Why is it sometimes the variance and sometimes the expected value? What is the difference?
There is a proof of the equivalence for the normal distribution. What if $\hat y_i$ follows an unknown distribution?
Also, what if $\hat y_i \le 0$ is allowed? Would one then need to take the absolute value of $\hat y_i$ in the denominator?