Hypothesis test shows a significant difference in variances. Is the difference big or small?


Two sample test of hypothesis for comparing two variances shows the following result:

Null hypothesis         σ(Param A) / σ(Param B) = 1
Alternative hypothesis  σ(Param A) / σ(Param B) ≠ 1
Significance level      α = 0.05

Statistics

                                          95% CI for
Variable            N  StDev  Variance      StDevs
Param A         47091  0.100     0.010  (0.100, 0.101)
Param B         47091  0.102     0.010  (0.101, 0.103)

Ratio of standard deviations = 0.982
Ratio of variances = 0.964

95% Confidence Intervals

                            CI for
         CI for StDev      Variance
Method       Ratio           Ratio
Bonett  (0.973, 0.991)  (0.947, 0.982)
Levene  (    *,     *)  (    *,     *)

Tests

                         Test
Method  DF1    DF2  Statistic  P-Value
Bonett    1      —      15.89    0.000
Levene    1  94180       3.15    0.076

Based on the p-value, I can conclude that there is a significant difference.

How do I conclude if the difference is significantly big or small?

I guess I am missing some theory here; I would appreciate it if someone could point me in the right direction for solving this problem.


Assuming that the Bonett test is applicable, you are correct that the small P-value 0.000 (meaning $<0.0005$) indicates that the two population SDs differ "significantly" (their ratio differs from 1).

The corresponding 95% CI says that $0.973 \le \frac{\sigma_1}{\sigma_2} \le 0.991,$ which indicates a small difference between the two population SDs. Notice that the CI does not contain $1.$ More directly, the sample SDs are $S_1 = 0.100$ and $S_2 = 0.102.$
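As a quick consistency check (a sketch, assuming the printed CIs are rounded to three places), squaring the endpoints of the Bonett CI for the SD ratio should reproduce the reported Bonett CI for the variance ratio:

```python
# Squaring the SD-ratio CI endpoints from the printout above.
sd_lo, sd_hi = 0.973, 0.991

var_lo = round(sd_lo ** 2, 3)  # lower endpoint of variance-ratio CI
var_hi = round(sd_hi ** 2, 3)  # upper endpoint of variance-ratio CI

print(var_lo, var_hi)  # 0.947 0.982, matching the printed (0.947, 0.982)
```

This works because the variance ratio is the square of the SD ratio, and squaring is monotone for positive values, so the CI endpoints transform directly.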

The population variances in your computer output are rounded too much so that there does not appear to be any difference between them. If the sample SDs are correct to three places, then the sample variances are a little different: $S_1^2 = 0.0100$ and $S_2^2 = 0.0104.$

The very large sample sizes have enabled you to detect, as statistically significant, a very small difference between the two sample SDs.

Whether the small difference between $S_1 = 0.100$ and $S_2 = 0.102$ is of any practical importance is not really a statistical question. That is for the experimenters to decide, based on their knowledge of the data and what is being measured.

Addendum per Comments: If you are testing $H_0: \sigma_1^2/\sigma_2^2 = 1$ against $H_a: \sigma_1^2/\sigma_2^2 \ne 1$ at the 5% level, using $F = S_1^2/S_2^2$ as the test statistic, then you will reject if $F < 0.9821$ or $F > 1.0182$. For your data $F = 0.9612 < 0.9821,$ so you reject because $F$ is below the lower critical value. Below is output from R statistical software for $F$ based on the computer printout above and a computation of the critical values. I show output from software because F values for such large samples as yours are not explicitly printed in tables of the F distribution.

f = (.100/.102)^2;  f
## 0.9611688
qf(.025, 47090, 47090)   # lower critical value
## 0.982098
qf(.975, 47090, 47090)   # upper critical value
## 1.018228
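The same cross-check can be done in Python (a sketch, assuming `scipy` is available; `scipy.stats.f.ppf` plays the role of R's `qf`):

```python
from scipy.stats import f

# Degrees of freedom: each sample size minus 1.
n1 = n2 = 47091
df1, df2 = n1 - 1, n2 - 1

# F statistic from the rounded sample SDs in the printout.
F = (0.100 / 0.102) ** 2

# Two-sided 5% critical values of the F distribution.
lower = f.ppf(0.025, df1, df2)
upper = f.ppf(0.975, df1, df2)

print(F)            # about 0.9612
print(lower, upper) # about 0.9821 and 1.0182
print(F < lower)    # True: F falls below the lower critical value, reject H0
```

With 47,090 degrees of freedom in both numerator and denominator, the critical values sit very close to 1, which is why even a tiny ratio like 0.9612 lands in the rejection region.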