When does the variance of a consistent estimator go to zero?


I came across the following statement (marked as true) in the multiple-choice section of an old exam:

The variance of a consistent estimator goes to zero with the growing sample size.

As far as I can tell, it can be translated as

Convergence in probability to a constant implies convergence in $L^2$.

Which is clearly false.

Is there a way to repair the statement? I mean, maybe the professor forgot to mention some additional assumption typical for the context. (E.g., convergence in probability together with uniform integrability implies $L^1$ convergence, but that seems irrelevant to the statement above.)


There are different concepts of consistency, tied to different convergence concepts. You have almost-sure consistency, consistency in probability (defined by convergence in probability), $L^2$-consistency, and so on.

EDIT

The claim is not true if consistency is defined via convergence in probability; it may hold under a stronger definition such as $L^2$-consistency. Here is a (somewhat artificial) counterexample: let $$ \hat{\theta}_n = \begin{cases} \theta & \text{with probability } \frac{n-1}{n}\\ \theta+n & \text{with probability } \frac1n \end{cases} $$ Then $P(|\hat{\theta}_n-\theta|>\varepsilon) \le \frac1n \rightarrow 0$, so $\hat{\theta}_n$ is consistent in probability, but $E(\hat{\theta}_n-\theta)^2 = 0\cdot \frac{n-1}{n} + n^2\cdot \frac1n = n \rightarrow \infty$. Since the bias $E(\hat{\theta}_n)-\theta = 1$ stays bounded, the variance $n-1$ also diverges, disproving the claim.
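A quick simulation makes the counterexample concrete (a sketch using NumPy; `theta = 5.0` and the Monte Carlo sample sizes are arbitrary choices). The probability of a large error shrinks like $1/n$ while the empirical mean squared error grows like $n$:

```python
import numpy as np

# Counterexample estimator: theta with prob (n-1)/n, theta + n with prob 1/n.
rng = np.random.default_rng(0)
theta = 5.0  # arbitrary true parameter value

def sample_estimator(n, size):
    """Draw `size` independent copies of theta_hat_n."""
    jumps = rng.random(size) < 1.0 / n   # Bernoulli(1/n) "jump" events
    return theta + n * jumps             # theta + n on a jump, theta otherwise

for n in [10, 100, 1000]:
    draws = sample_estimator(n, 100_000)
    p_far = np.mean(np.abs(draws - theta) > 0.5)  # approx 1/n -> 0: consistency
    mse = np.mean((draws - theta) ** 2)           # approx n -> infinity
    print(f"n={n:5d}  P(|err|>0.5) ~ {p_far:.4f}  E(err^2) ~ {mse:.1f}")
```

So larger $n$ drives the error probability toward zero while the second moment of the error keeps growing, exactly as the calculation above predicts.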