Does $(\bar X-\mu_X)/\hat{\theta}$ approach $N(0,1)$ in distribution for any unbiased estimator $\hat{\theta}^2$ of $\sigma_{X}^2/n$?


Note: I asked this question on Cross Validated (Stack Exchange) with no takers. I guess the question belongs here instead, so I've deleted it there.

In a basic statistics course we see CLT-like theorems appear for three cases of $\hat \theta$:

  1. For $\hat \theta=\frac{\sigma_X}{\sqrt{n}}$, this is the classical CLT.
  2. For $\hat \theta=\frac{s}{\sqrt{n}}$, where $s$ is the sample standard deviation.
  3. In hypothesis testing with $H_0: \mu_U=\mu_W$, let $X=U-W$ and use $\hat \theta=\sqrt{\frac{s_1^2}{n}+\frac{s_2^2}{m}}$, where $s_1^2$ is the sample variance of $U$ and $s_2^2$ is the sample variance of $W$.

So this makes me wonder: is the statement in the title of the question, which generalizes all of these cases, correct?

References (or counter-examples) would be ideal. If the statement in the title of the question is incorrect, then what is the right generalization of the CLT that captures cases 1–3 above?


BEST ANSWER

As you said, the classical CLT states that

$$\sqrt{n} \frac {\bar{X} - \mu_X} {\sigma_X} \stackrel {d} {\to} \mathcal{N}(0,1)$$

For any consistent estimator $\hat{\theta}$ of $\sigma_X$ (note that here the $\sqrt{n}$ is factored out, so $\hat{\theta}$ estimates $\sigma_X$ rather than $\sigma_X/\sqrt{n}$), we have $$ \hat{\theta} \stackrel {p} {\to} \sigma_X$$

Then by Slutsky's theorem, $$\sqrt{n} \frac {\bar{X} - \mu_X} {\hat{\theta}} = \sqrt{n} \frac {\bar{X} - \mu_X} {\sigma_X} \frac {\sigma_X} {\hat{\theta}}\stackrel {d} {\to} \mathcal{N}(0,1)$$

This is a standard technique that statisticians use everywhere: replacing a parameter with a consistent estimator of it.
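As a quick sanity check of the Slutsky argument (my own illustration, not part of the answer), one can simulate the studentized mean $\sqrt{n}(\bar X - \mu_X)/s$ for a skewed population and verify it is close to $N(0,1)$; the exponential population and the specific sample sizes below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the studentized mean sqrt(n) * (Xbar - mu) / s for an
# exponential population (mu = 1, sigma = 1), where s is the sample
# standard deviation -- a consistent estimator of sigma.
n, reps = 500, 20000
samples = rng.exponential(scale=1.0, size=(reps, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)          # consistent for sigma = 1
t_stat = np.sqrt(n) * (xbar - 1.0) / s

# By Slutsky's theorem the statistic should be approximately N(0, 1),
# so its empirical mean should be near 0 and its empirical sd near 1.
print(round(float(t_stat.mean()), 2), round(float(t_stat.std()), 2))
```

The empirical mean and standard deviation come out close to $0$ and $1$, as the theorem predicts.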

ANOTHER ANSWER

The answer to your title question is no.

Unbiasedness is a very weak condition. For instance, for a sample $x_1,x_2,\ldots,x_n$ of size $n\ge 2,$ $$ \hat \sigma_\text{bad}^2 = \left(x_2-\frac{x_1+x_2}{2}\right)^2 + \left(x_1 - \frac{x_1+x_2}{2}\right)^2 = \frac{(x_1-x_2)^2}{2} $$ is an unbiased estimator of $\sigma_X^2$, but since it uses only the first two observations, we cannot expect it to converge in any way to the value of the parameter as $n$ grows. Because this estimator retains its random fluctuations no matter how large the sample, it is a counterexample to your statement.
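This failure is easy to see in a simulation (my own illustration, with arbitrary sample sizes): using $\hat\sigma_\text{bad}$ in place of $\sigma_X$, the statistic $\sqrt{n}(\bar X - \mu_X)/\hat\sigma_\text{bad}$ keeps heavy tails, landing outside $\pm 3$ far more often than a standard normal would.

```python
import numpy as np

rng = np.random.default_rng(1)

# The "bad" estimator uses only the first two observations:
# sigma_bad^2 = (x1 - x2)^2 / 2.  It is unbiased for sigma^2 but never
# concentrates, so sqrt(n) * (Xbar - mu) / sigma_bad is not N(0, 1).
n, reps = 500, 20000
samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))  # mu = 0, sigma = 1
xbar = samples.mean(axis=1)
sigma_bad = np.abs(samples[:, 0] - samples[:, 1]) / np.sqrt(2.0)
t_bad = np.sqrt(n) * xbar / sigma_bad

# For N(0, 1), P(|Z| > 3) is about 0.0027; the bad statistic (roughly a
# ratio of independent normals, hence heavy-tailed) exceeds that badly.
tail_freq = float(np.mean(np.abs(t_bad) > 3.0))
print(round(tail_freq, 3))
```

The observed tail frequency is orders of magnitude larger than the $0.0027$ a standard normal limit would give.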

A better question is whether the statement holds for any consistent estimator. As BGM indicated in their answer, the answer to that is yes.