Checking consistency of an estimator


Let $X_1,X_2,\dots,X_n$ be i.i.d. $N(\mu,\sigma^2)$. I want to prove or disprove the consistency of the following estimator of the variance parameter: $$\alpha_n:=\frac1n\cdot\sum_{i=1}^{n}(X_i-\bar{X})^2.$$
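As a concrete sketch (not part of the original question), $\alpha_n$ is exactly what NumPy's `np.var` computes with `ddof=0`; the names below are illustrative:

```python
import numpy as np

def alpha_n(x):
    """Biased sample variance: (1/n) * sum (x_i - xbar)^2."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=10_000)  # mu = 2, sigma = 3

print(alpha_n(x))          # close to sigma^2 = 9 for large n
print(np.var(x, ddof=0))   # identical by definition
```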

My work so far

First, I will use the theorem stating that if $X_1,X_2,\dots,X_n$ are i.i.d. $N(\mu,\sigma^2)$ and $S^2:=\frac{1}{n-1}\cdot\sum_{i=1}^{n}(X_i-\bar{X})^2$, then $\frac{(n-1)S^2}{\sigma^2}$ has a $\chi^2$ distribution with $n-1$ degrees of freedom.

So $Var(\frac{(n-1)S^2}{\sigma^2})=2(n-1)\Rightarrow Var(S^2)=\frac{2\sigma^4}{n-1}$.

But I also have the following identity:

$S^2=\frac{n}{n-1}\cdot\alpha_n\Rightarrow Var(S^2)=\frac{n^2}{(n-1)^2}\cdot Var(\alpha_n)\Rightarrow Var(\alpha_n)=\frac{(n-1)^2}{n^2}\cdot\frac{2\sigma^4}{n-1}=\frac{2(n-1)\sigma^4}{n^2}$

So $Var(\alpha_n)\rightarrow0$ as $n\to\infty$. But I don't know what to do next. Can you give me a hint?
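As a numerical sanity check (a sketch, not part of the original derivation), the variance of $\alpha_n$ can be estimated by Monte Carlo simulation; for normal data it equals $2(n-1)\sigma^4/n^2\approx 2\sigma^4/n$ and shrinks as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, reps = 0.0, 1.0, 5_000

def var_of_alpha(n):
    """Monte Carlo estimate of Var(alpha_n) over `reps` replications."""
    samples = rng.normal(mu, sigma, size=(reps, n))
    alphas = np.var(samples, axis=1, ddof=0)   # one alpha_n per replication
    return alphas.var()

for n in (10, 100, 1000):
    exact = 2 * (n - 1) * sigma**4 / n**2      # 2(n-1)sigma^4 / n^2
    print(n, var_of_alpha(n), exact)
```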


2 Answers

Best answer

We need a lemma.

Let $(X_n)$ be a sequence of random variables such that $EX_n\to c$ and $\text{Var}(X_n)\to 0$ for some $c\in \mathbb{R}$. Then $X_n\to c$ in probability.

Here is a proof. Fix $\varepsilon>0$ and note that $$ \begin{align} P(|X_n-c|>\varepsilon) &\leq P(|X_n-EX_n|>\varepsilon/2)+P(|EX_n-c|>\varepsilon/2)\\ &\leq\frac{4}{\varepsilon^2}\text{Var}(X_n)+P(|EX_n-c|>\varepsilon/2)\to 0 \end{align} $$ as $n\to\infty$, where we used Chebyshev's inequality in the second line, together with the fact that $EX_n$ is deterministic and $EX_n\to c$, so for sufficiently large $n$ we have $P(|EX_n-c|>\varepsilon/2)=0$. $\blacksquare$
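The lemma's conclusion can be illustrated empirically (a sketch, not part of the proof): for $\alpha_n$ the exceedance probability $P(|\alpha_n-\sigma^2|>\varepsilon)$ shrinks as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, eps, reps = 1.0, 0.2, 4_000

def exceed_prob(n):
    """Fraction of replications with |alpha_n - sigma^2| > eps."""
    samples = rng.normal(0.0, sigma, size=(reps, n))
    alphas = np.var(samples, axis=1, ddof=0)
    return np.mean(np.abs(alphas - sigma**2) > eps)

for n in (20, 200, 2000):
    print(n, exceed_prob(n))   # decreases toward 0
```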

Now apply the lemma to your problem, taking $X_n=\alpha_n$. You've shown that $\text{Var}(\alpha_n)\to 0$, and moreover $E\alpha_n=\frac{n-1}{n}\sigma^2\to \sigma^2$, so $\alpha_n\to \sigma^2$ in probability.

So the estimator is consistent.

Appendix The assumption that the $X_i$ are normal is not necessary. Below I outline a solution that avoids this assumption but uses more machinery.

Note that expanding the square, one gets $$ \alpha_n=\frac{1}{n}\sum_{i=1}^n X_i^2-\bar{X}^2. $$ At this point, since the $X_i$ are i.i.d. with finite second moment, the strong law of large numbers (applied to the sequence $(X_i^2)$) tells us that $$ \frac{1}{n}\sum_{i=1}^n X_i^2\stackrel{a.s.}{\to}EX_1^2=\mu^2+\sigma^2. $$ Similarly $\bar{X}\to \mu$ a.s. as $n\to \infty$, whence by continuity $\bar{X}^2\to\mu^2$ a.s. It follows that $$ \alpha_n=\frac{1}{n}\sum_{i=1}^n X_i^2-\bar{X}^2\stackrel{a.s.}{\to}\mu^2+\sigma^2-\mu^2=\sigma^2 $$ as $n\to \infty$. In particular $\alpha_n\stackrel{p}{\to}\sigma^2$, as desired.
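The algebraic identity used above can be checked numerically (a sketch with arbitrary illustrative parameters): the "mean of squares minus square of mean" form agrees with the direct definition up to floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(1.5, 2.0, size=1_000)

# (1/n) sum (x_i - xbar)^2  ==  (1/n) sum x_i^2  -  xbar^2
lhs = np.mean((x - x.mean()) ** 2)
rhs = np.mean(x ** 2) - x.mean() ** 2
print(lhs, rhs)   # agree up to floating-point rounding
```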


You can use the fact that if an estimator $\hat\alpha_n$ is asymptotically unbiased and satisfies $\lim \limits_{n \to \infty} Var(\hat\alpha_n) = 0$, then it is consistent. This is a corollary that follows from the properties of convergence in probability. So you just have to show that your estimator is asymptotically unbiased. If you need more help, just ask :-)
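Asymptotic unbiasedness can likewise be seen in simulation (a sketch, not part of the answer): the average of $\alpha_n$ over many replications tracks $\frac{n-1}{n}\sigma^2$, which tends to $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, reps = 2.0, 10_000

def mean_alpha(n):
    """Monte Carlo estimate of E[alpha_n]."""
    samples = rng.normal(0.0, sigma, size=(reps, n))
    return np.var(samples, axis=1, ddof=0).mean()

for n in (5, 50, 500):
    print(n, mean_alpha(n), (n - 1) / n * sigma**2)   # bias vanishes as n grows
```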