Prove that the variance of the MLE of $f(x\mid\theta)=\theta x^{\theta-1}$ tends to $0$ for large $n$


I am given that $X_1,\ldots,X_n$ are randomly sampled from the following distribution: $f(x\mid\theta)=\theta x^{\theta-1}$, $0<x<1$, $0<\theta<\infty$.

I need to show that the variance of the MLE of $\theta$ tends to zero as $n$ tends to infinity.

I know that the MLE of $θ$ is $\hat{\theta} =\frac{-1}{(1/n)\sum_{i=1}^n \log X_i}$.
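(For completeness, that MLE follows from maximizing the log-likelihood; a standard derivation, sketched here:
$$\ell(\theta) = \sum_{i=1}^n \log\left(\theta X_i^{\theta-1}\right) = n\log\theta + (\theta-1)\sum_{i=1}^n \log X_i,$$
so setting $\ell'(\theta) = \dfrac{n}{\theta} + \sum_{i=1}^n \log X_i = 0$ gives $\hat{\theta} = \dfrac{-n}{\sum_{i=1}^n \log X_i}$, which matches the expression above.)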

I am stuck on finding the variance of the MLE, however.

I've been thinking about solving $E(X^2) = \int x^2f(x)\,dx$ and $E(X) = \int xf(x) \, dx$ with the MLE as the argument, and rearranging to solve for $\operatorname{var}(\hat{\theta})$, but I'm not sure what $f(x)$ would look like using the MLE. It's a really hairy-looking integral.

For instance, would $E\big(\hat{\theta}^2\big) = \int_0^1 \left( \frac{-1}{(1/n) \sum_{i=1}^n\log X_i} \right)^2 \theta x^{\theta-1}\,dx$ require that I uniformly replace $\theta$ with the MLE $\hat{\theta}$? Does this look right? Am I on the right/wrong track?


If $X \sim f(x\mid\theta)$, it's pretty easy to show that $-\ln(X)\sim \text{Exp}(\theta)$ (exponential with rate $\theta$), which makes $$\frac{1}{\hat{\theta}}=\frac{1}{n}\sum_{j=1}^n-\ln(X_j)\sim \text{Erlang}(n,n\theta).$$ From the law of the unconscious statistician, $$\mathbb{E}\Big(\hat{\theta}\Big)=\int_0^{\infty}\frac{1}{t}\cdot \frac{(n\theta)^nt^{n-1}e^{-n\theta t}}{(n-1)!}\,dt=\frac{n\theta}{n-1}$$
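(In case the integral step is not obvious, it's a standard Gamma integral: using $\int_0^\infty t^{k}e^{-at}\,dt = k!/a^{k+1}$ for integer $k\ge 0$,
$$\mathbb{E}\Big(\hat{\theta}\Big)=\frac{(n\theta)^n}{(n-1)!}\int_0^\infty t^{n-2}e^{-n\theta t}\,dt=\frac{(n\theta)^n}{(n-1)!}\cdot\frac{(n-2)!}{(n\theta)^{n-1}}=\frac{n\theta}{n-1},$$
valid for $n\ge 2$; the second moment below needs $n\ge 3$ for the same reason.)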

$$\mathbb{E}\Big(\hat{\theta}^2\Big)=\int_0^{\infty}\frac{1}{t^2}\cdot \frac{(n\theta)^nt^{n-1}e^{-n\theta t}}{(n-1)!}\,dt=\frac{n^2 \theta^2}{(n-1)(n-2)}$$ So finally, $$\mathbb{V}\Big(\hat{\theta}\Big)=\mathbb{E}\Big(\hat{\theta}^2\Big)-\Big(\mathbb{E}\big(\hat{\theta}\big)\Big)^2=\frac{n^2 \theta^2}{(n-1)(n-2)}-\frac{n^2\theta^2}{(n-1)^2}=\frac{n^2 \theta^2}{(n-1)^2(n-2)},$$ which tends to $0$ as $n \longrightarrow \infty$.
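As a sanity check (not part of the proof), a quick Monte Carlo simulation reproduces these moments. The sampler uses the fact that $X = U^{1/\theta}$ with $U\sim\text{Uniform}(0,1)$ has density $\theta x^{\theta-1}$ on $(0,1)$; the function name `mle_theta` and the parameter choices are just for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0  # true parameter (arbitrary choice for the check)

def mle_theta(n, reps=20000):
    """Draw `reps` samples of size n and return the MLE for each sample."""
    # Inverse-CDF sampling: X = U**(1/theta) has density theta * x**(theta - 1).
    x = rng.random((reps, n)) ** (1.0 / theta)
    # MLE: theta-hat = -1 / mean(log X_i)
    return -1.0 / np.log(x).mean(axis=1)

for n in (10, 100, 1000):
    est = mle_theta(n)
    exact_var = n**2 * theta**2 / ((n - 1) * (n - 2)) - (n * theta / (n - 1)) ** 2
    print(f"n={n:4d}  simulated var={est.var():.5f}  exact var={exact_var:.5f}")
```

The printed variances shrink roughly like $\theta^2/n$, in line with the closed form above.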