Prove: Consistency of the MLE estimator.


Question:
Let $\theta_n$ be the MLE estimator from an i.i.d. sample $\left \{ X_1=x_1, X_2=x_2,...,X_n=x_n \right \}$ of size $n$, and let $\theta_0$ be the parameter to estimate.
Prove the consistency of the MLE estimator.

My answer:
1 - First, note that $(\theta_n)_n$ is a sequence of random variables, while $\theta_0$ is a parameter (a "constant").
So we are asked to prove: $\forall \epsilon>0:\ \lim_{n \to \infty }\mathbb{P}(|\theta_n - \theta_0| \geq \epsilon)=0$

2 - To do that, we use Markov's inequality with $r=2$ (the conditions are verified by the question's hypotheses):
$\forall \epsilon>0:\quad \mathbb{P}(|\theta_n - \theta_0| \geq \epsilon) \leq \frac{E(|\theta_n-\theta_0|^2)}{\epsilon^2}=\frac{E((\theta_n-\theta_0)^2)}{\epsilon^2}=\frac{E(\theta_n^2+\theta_0^2-2\theta_n \theta_0)}{\epsilon^2}=\frac{E(\theta_n^2)+\theta_0^2-2\theta_0E(\theta_n)}{\epsilon^2}$
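I also note (as a sketch of the usual argument, not specific to any particular model) that the numerator in this bound is the mean squared error, which, when the second moment exists, splits into variance plus squared bias:

$$\mathbb{E}\big[(\theta_n-\theta_0)^2\big]=\operatorname{Var}(\theta_n)+\big(\mathbb{E}(\theta_n)-\theta_0\big)^2$$

So it would suffice to show that $\operatorname{Var}(\theta_n)\to 0$ and $\mathbb{E}(\theta_n)\to\theta_0$ as $n\to\infty$.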

3 -

  • Here I am stuck: to finish the proof I would like to use the fact that $\theta_n$ converges in law to $\theta_0$, but how can I prove this last fact (without, of course, using convergence in probability)?
    (I have tried to do it via the weak law of large numbers, but $\theta_1, \theta_2,...$ do not have the same law.)
  • I have thought of this way to finish the proof:
    If the MLE is asymptotically unbiased, then $\lim_{n \to \infty }E(\theta_n)-\theta_0=0 \Rightarrow \lim_{n \to \infty }E(\theta_n) = \theta_0$, which handles the cross term in the numerator. But it requires proving that the MLE is unbiased as $n$ goes to infinity.
    It would also help me to prove that $\lim_{n \to \infty }E(\theta_n^2)=\theta_0^2$: if $\theta_n$ converges in law to $\theta_0$, then since $f(x)=x^2$ is a continuous function, $\theta_n^2$ converges in law to $\theta_0^2$, and (assuming the expectations also converge) $\lim_{n \to \infty }E(\theta_n^2)=E(\theta_0^2)=\theta_0^2$.
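As a sanity check (a simulation, not a proof), here is a small experiment for one concrete model: an Exponential sample with rate $2$, where the MLE of the rate is $1/\bar{x}$. The model, the rate value, and the function name `mle_rate` are just illustrative assumptions, not part of the question.

```python
import random

# Empirical illustration of MLE consistency (assumed model: Exponential(rate)).
# For an i.i.d. Exponential(rate) sample, the MLE of the rate is
# lambda_hat = n / sum(x_i) = 1 / sample_mean.
random.seed(0)
true_rate = 2.0

def mle_rate(n):
    """Draw an Exponential(true_rate) sample of size n and return the MLE."""
    sample = [random.expovariate(true_rate) for _ in range(n)]
    return n / sum(sample)

# The estimate should drift toward true_rate as n grows.
for n in [10, 1_000, 100_000]:
    print(n, mle_rate(n))
```

The printed estimates should approach $2.0$ as $n$ increases, consistent with $\theta_n \to \theta_0$ in probability.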

If anyone has another nice proof, I will be happy to read it.

Thanks for your help.