Show that the maximum likelihood estimator is consistent


Let $Y \sim \mathcal{B}(n,\theta)$. Show that the estimator $\hat{\theta}(Y) = Y/n$ is consistent. The hint says to use the Central Limit Theorem and the mean and variance formulas for the binomial distribution.

I have to show that, for every $\varepsilon > 0$,

$$ \lim_{n \to \infty} \mathbb{P}_\theta\bigg(\bigg\vert \frac{Y}{n} - \theta\bigg\vert > \varepsilon\bigg) = 0. $$

But I am unsure how to use the hint to continue from here. Any help would be appreciated.


With $\hat \theta_n \mathrel {:=} Y/n$, here are three quick ways to solve this problem:

  • From the comments you already know that $\mathbb E_\theta[\hat \theta_n] = \theta$ and $\mathbb V_\theta[\hat \theta_n] = \theta (1-\theta)/n$ for all $\theta \in [0,1]$ and $n \in \mathbb N_{\geq 1}.$
    Therefore, $\mathbb E_\theta[(\hat \theta_n - \mathbb E_\theta[\hat \theta_n])^2] = \mathbb V_\theta[\hat \theta_n] = \theta (1-\theta)/n \overset{n \to \infty} {\longrightarrow} 0$, i.e., $\hat \theta_n$ converges in $L^2$ to $\theta$ as $n \to \infty$. Use the relation between convergence in $L^2$ and convergence in probability to finish the proof.
  • Represent $Y$ as $Y = \sum_{i=1}^n X_i,$ with $X_i \overset{\mathrm{i.i.d.}}{\sim} \mathrm{Bernoulli}(\theta)$ for all $n \in \mathbb N_{\geq 1}, i \in \{1, \ldots, n\}.$
    Then, use the weak LLN to conclude that $\hat \theta_n \overset{p}{\longrightarrow} \theta$ as $n \to \infty.$
  • If $\theta \in \{0,1\}$, then $\mathbb V_\theta[\hat \theta_n] = 0$ and $\hat \theta_n = \mathbb E_\theta[\hat \theta_n] = \theta$ almost surely. You can use the relation between almost sure convergence and convergence in probability to conclude that $\hat \theta_n$ is also weakly consistent.
    For $\theta \in (0,1)$, again, write $Y$ as $Y = \sum_{i=1}^n X_i,$ with $X_i \overset{\mathrm{i.i.d.}}{\sim} \mathrm{Bernoulli}(\theta)$ for all $n \in \mathbb N_{\geq 1}, i \in \{1, \ldots, n\}.$
    The Lindeberg-Lévy CLT gives $ \sqrt{n}\left(\hat \theta_n - \theta\right) \overset{d}{\underset{n \to \infty}\longrightarrow} \mathop{\mathcal N}\left(0, \theta\left(1-\theta\right)\right). $
    To show that this implies that $\hat \theta_n$ converges in distribution (or, equivalently, since $\theta$ is a constant, in probability) to $\theta$ as $n$ tends to infinity, rewrite $\hat \theta_n$ as follows: $$ \hat \theta_n = \underbrace{\frac{1}{\sqrt{n}}}_{{\underset{n \to \infty}\longrightarrow}0} \underbrace{\sqrt{n}\left(\hat \theta_n - \theta\right)}_{\overset{d}{\underset{n \to \infty}\longrightarrow} \mathop{\mathcal N}\left(0, \theta\left(1-\theta\right)\right)} + \theta. $$ Now, use Slutsky's theorem to finish the proof: $$ \hat \theta_n \overset{d}{\underset{n \to \infty}\longrightarrow} 0 + \theta = \theta \iff \hat \theta_n \overset{p}{\underset{n \to \infty}\longrightarrow} \theta. $$
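As a sanity check on any of the three arguments, you can estimate $\mathbb{P}_\theta(|\hat\theta_n - \theta| > \varepsilon)$ by simulation and watch it shrink as $n$ grows. This is a minimal sketch, not part of the proof; the choices $\theta = 0.3$, $\varepsilon = 0.02$, and the repetition count are arbitrary:

```python
import random

random.seed(0)
theta = 0.3   # true parameter (arbitrary choice for this illustration)
eps = 0.02    # tolerance in the definition of weak consistency
reps = 500    # Monte Carlo repetitions per sample size

# For increasing n, estimate P(|Y/n - theta| > eps) by simulation,
# drawing Y as a sum of n i.i.d. Bernoulli(theta) variables.
for n in [100, 1_000, 10_000]:
    exceed = 0
    for _ in range(reps):
        y = sum(random.random() < theta for _ in range(n))
        if abs(y / n - theta) > eps:
            exceed += 1
    print(f"n={n:>6}: P(|Y/n - theta| > {eps}) ~ {exceed / reps:.3f}")
```

The printed probabilities decrease toward zero, matching the $\theta(1-\theta)/n$ variance bound from the first bullet: by Chebyshev's inequality the probability is at most $\theta(1-\theta)/(n\varepsilon^2)$, which vanishes as $n \to \infty$.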