How to prove that $\hat \theta_n \to_P \theta_0$ where $\theta_0$ is the true value of $\theta$?


Suppose $X_1,\ldots,X_n$ is a random sample from the $\operatorname{Gamma}(\frac 1 2,\frac 1 \theta)$ distribution, with density $$f(x_i)=\dfrac{x_i^{-1/2} e^{-\theta x_i}}{\Gamma(\frac 1 2 )\,(1/\theta)^{1/2}}, \qquad x_i > 0.$$

I've already found the M.L. estimator $\hat \theta_n = \dfrac 1 {2\bar X_n}$, but I don't really understand the meaning of the question. What does it mean by "the true value of $\theta$"? How should I approach this question? Thanks!

Edit: I think this is the same as showing that $\hat \theta_n$ is an unbiased estimator of $\theta_0$, but I can't get $E(\hat \theta_n)=\theta$ for some reason.

Edit: I figured it out. It is true that $E(\hat \theta_n)=\theta$, thus $\hat \theta_n$ is an unbiased estimator of $\theta_0$, which implies $\hat \theta_n \to_P \theta_0$.

On BEST ANSWER

The expression $\dfrac{x^{-1/2} e^{-\theta x_i}}{\Gamma(1/2)(1/\theta)^{1/2} }$ is used in the process of finding the MLE, and that expression in itself is non-committal as to what the true value of $\theta$ is.

The statement that the "true value" of $\theta$ is $\theta_0$ just means that the distribution of each observation is $$\frac{x_i^{-1/2} e^{-\theta_0 x_i}}{\Gamma(1/2) (1/\theta_0)^{1/2}} \, dx_i = \frac 1 {\Gamma(1/2)}\cdot \left( \theta_0 x_i \right)^{-1/2} e^{-\theta_0 x_i} (\theta_0\,dx_i) \text{ for } x_i\ge0 .$$

I reached the same conclusion you did about what the MLE is in this case.

To show that $\hat\theta_n\to\theta_0$ in probability as $n\to\infty,$ you will probably want to use Chebyshev's inequality. You will need to know both $\operatorname{E}(\hat\theta_n)$ and $\operatorname{var}(\hat\theta_n).$
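Concretely, the version of Chebyshev's inequality that is useful here is the one centered at $\theta_0$ rather than at the mean of $\hat\theta_n$: for any $\varepsilon>0,$
$$ P\left( |\hat\theta_n - \theta_0| \ge \varepsilon \right) \le \frac{\operatorname{E}\left( (\hat\theta_n - \theta_0)^2 \right)}{\varepsilon^2}, $$
so it is enough to show that the mean squared error on the right-hand side goes to $0$ as $n\to\infty.$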

My first hasty computation of $\operatorname{E}\left( \dfrac 1 {2 \overline X_n} \right)$ is telling me that it is $\dfrac{\theta_0 n}{n-2},$ so $\hat\theta_n$ is not unbiased, but is asymptotically unbiased.
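Here is a sketch of that computation. The sum $S_n = \sum_{i=1}^n X_i$ is a sum of i.i.d. $\operatorname{Gamma}(\frac12, \frac1{\theta_0})$ random variables, so $S_n \sim \operatorname{Gamma}(\frac n2, \frac1{\theta_0}),$ and the standard inverse-moment formula for the gamma distribution gives
$$ \operatorname{E}\left( \frac 1 {S_n} \right) = \frac{\theta_0}{\frac n2 - 1} = \frac{2\theta_0}{n-2} \qquad (n > 2), $$
hence
$$ \operatorname{E}(\hat\theta_n) = \operatorname{E}\left( \frac{n}{2 S_n} \right) = \frac n2 \cdot \frac{2\theta_0}{n-2} = \frac{n\theta_0}{n-2}. $$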

Your claim that the mere fact that $\hat\theta_n$ is unbiased is enough to lead to the conclusion that it converges in probability to $\theta_0$ is wrong. For example, if $X_i \sim N(\theta_0, 1),$ then the estimator $X_1$ is unbiased for $\theta_0,$ but its distribution does not change with $n,$ so it does not converge in probability to $\theta_0.$

I'm getting $\operatorname{var}(\hat\theta_n) = \dfrac{2n^2\theta_0^2}{ (n-2)^2(n-4)} \to 0 \text{ as } n\to\infty$ (for $n>4$).
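This follows the same pattern as the mean, using the second inverse moment of $S_n \sim \operatorname{Gamma}(\frac n2, \frac1{\theta_0})$:
$$ \operatorname{E}\left( \frac 1 {S_n^2} \right) = \frac{\theta_0^2}{\left(\frac n2 - 1\right)\left(\frac n2 - 2\right)} = \frac{4\theta_0^2}{(n-2)(n-4)} \qquad (n > 4), $$
so that
$$ \operatorname{var}(\hat\theta_n) = \frac{n^2}{4}\,\operatorname{E}\left( \frac 1 {S_n^2} \right) - \left( \frac{n\theta_0}{n-2} \right)^2 = \frac{n^2\theta_0^2}{(n-2)(n-4)} - \frac{n^2\theta_0^2}{(n-2)^2} = \frac{2n^2\theta_0^2}{(n-2)^2(n-4)}. $$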

The mean squared error of $\hat\theta_n$ as an estimator of $\theta_0$ is $$ \operatorname{E}( (\hat\theta_n - \theta_0)^2) = \text{mean squared error} = \text{variance} + (\text{bias})^2 = \cdots. $$ Work that out and show it approaches $0$ as $n$ grows, and then use Chebyshev's inequality.
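As a sanity check (not a proof), a quick Monte Carlo simulation shows the mean squared error of $\hat\theta_n = 1/(2\overline X_n)$ shrinking as $n$ grows; the choice $\theta_0 = 3$ below is arbitrary.

```python
import random
from statistics import fmean

def mse_of_mle(theta_0: float, n: int, reps: int = 2000, seed: int = 0) -> float:
    """Monte Carlo estimate of E[(theta_hat_n - theta_0)^2], where
    theta_hat_n = 1 / (2 * Xbar_n) and X_i ~ Gamma(shape 1/2, scale 1/theta_0)."""
    rng = random.Random(seed)
    sq_errors = []
    for _ in range(reps):
        # gammavariate takes (shape, scale), so scale = 1/theta_0
        xbar = fmean(rng.gammavariate(0.5, 1.0 / theta_0) for _ in range(n))
        sq_errors.append((1.0 / (2.0 * xbar) - theta_0) ** 2)
    return fmean(sq_errors)

for n in (10, 100, 1000):
    print(f"n={n:5d}  estimated MSE = {mse_of_mle(3.0, n):.4f}")
```

The printed MSE should track the theoretical value $\operatorname{var}(\hat\theta_n) + (\text{bias})^2,$ which is roughly $2\theta_0^2/n$ for large $n.$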