MLE of $ \theta$ in $ N(\theta, \theta^2) $ and Asymptotic Distribution of $\hat{{\theta}}_{\text{MLE}}$


Question
Let $X_1, X_2, \ldots, X_n$ be an independent random sample from $N(\theta, \theta^2)$, where $\theta \neq 0$.

Find the MLE for $\theta$ and the asymptotic distribution of the MLE.

Approach
$ L(X_1, X_2, \ldots, X_n; \theta) = \prod_{i=1}^{n} \left( \frac{1}{\sqrt{2\pi\theta^2}} \right) \exp\left\{-\frac{(x_i - \theta)^2}{2\theta^2}\right\} = \frac{1}{(\theta \sqrt{2\pi})^n} \exp\left\{-\frac{1}{2\theta^2} \sum_{i=1}^{n} (x_i - \theta)^2\right\} $

I know that I need to maximize this function, but I'm just not sure how to proceed for this one.

Leads
https://stats.stackexchange.com/questions/369417/mle-of-theta-in-n-theta-theta2
This post provides the following MLE for $\theta$:
$\hat{\theta}_{\text{MLE}} = -\frac{\bar{X}}{2} + \sqrt{\frac{\sum_{i=1}^{n} X_i^2}{n} + \frac{\bar{X}^2}{4}} $
I understood their method, but the MLE is not unique (the score equation also has a negative root, relevant when $\theta < 0$), and I cannot see how to find the asymptotic distribution here. So I am unsure whether I should somehow find another MLE of $\theta$, or whether I'm just being dumb and cannot make out the asymptotic distribution.
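To sanity-check the closed-form root numerically, here is a quick sketch (assuming NumPy; the true $\theta = 2$ and the sample size are made up for the simulation) that draws from $N(\theta, \theta^2)$, evaluates the formula from the linked post, and confirms it beats nearby values of the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0          # hypothetical true parameter, chosen for the demo
n = 10_000
x = rng.normal(theta, abs(theta), size=n)  # sample from N(theta, theta^2)

# Closed-form positive root from the linked post
xbar = x.mean()
theta_hat = -xbar / 2 + np.sqrt((x**2).mean() + xbar**2 / 4)

def loglik(t):
    # log-likelihood, with |t| so it is also defined for t < 0
    return -n * np.log(abs(t) * np.sqrt(2 * np.pi)) - ((x - t) ** 2).sum() / (2 * t**2)

# theta_hat should beat nearby candidate values
for t in (theta_hat * 0.99, theta_hat * 1.01):
    assert loglik(theta_hat) > loglik(t)

print(theta_hat)  # close to the true theta = 2
```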

This is what ChatGPT gave me (supercharged with the Wolfram Plugin):
Given the likelihood function:
$ L(X_1, X_2, \ldots, X_n; \theta) = \frac{1}{(\theta \sqrt{2\pi})^n} \exp\left\{-\frac{1}{2\theta^2} \sum_{i=1}^{n} (x_i - \theta)^2\right\} $

The log-likelihood function is:
$ \ell(\theta) = \log L(X_1, X_2, \ldots, X_n; \theta) $
$ \ell(\theta) = -n \log(\theta \sqrt{2\pi}) - \frac{1}{2\theta^2} \sum_{i=1}^{n} (x_i - \theta)^2$
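This simplified form can be checked against a direct sum of Gaussian log-densities (a NumPy sketch; note it assumes $\theta > 0$, since for $\theta < 0$ the standard deviation is $|\theta|$ and the log term needs $|\theta|$):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.5
x = rng.normal(theta, abs(theta), size=50)
n = x.size

# Simplified log-likelihood from the text (valid for theta > 0)
ell = -n * np.log(theta * np.sqrt(2 * np.pi)) - ((x - theta) ** 2).sum() / (2 * theta**2)

# Direct sum of N(theta, theta^2) log-densities
direct = np.sum(-0.5 * np.log(2 * np.pi * theta**2) - (x - theta) ** 2 / (2 * theta**2))

assert np.isclose(ell, direct)
```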

Differentiating the log-likelihood with respect to $\theta$:
$\frac{d\ell(\theta)}{d\theta} = -\frac{n}{\theta} - \frac{\sum_{i=1}^{n} (2\theta - 2x_i)}{2\theta^2} + \frac{\sum_{i=1}^{n} (x_i - \theta)^2}{\theta^3}$

The second derivative is:
$\frac{d^2\ell(\theta)}{d\theta^2} = \frac{2 \sum_{i=1}^{n} (2\theta - 2x_i)}{\theta^3} - \frac{3 \sum_{i=1}^{n} (x_i - \theta)^2}{\theta^4}$
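Both derivative formulas can be verified by central finite differences on simulated data (a NumPy sketch with an arbitrary evaluation point; both checks pass, so the expressions above are algebraically consistent with the log-likelihood):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(2.0, 2.0, size=100)
n = x.size

def ell(t):
    # log-likelihood (t > 0)
    return -n * np.log(t * np.sqrt(2 * np.pi)) - ((x - t) ** 2).sum() / (2 * t**2)

def score(t):
    # first derivative exactly as written above
    return -n / t - (2 * t - 2 * x).sum() / (2 * t**2) + ((x - t) ** 2).sum() / t**3

def hess(t):
    # second derivative exactly as written above
    return 2 * (2 * t - 2 * x).sum() / t**3 - 3 * ((x - t) ** 2).sum() / t**4

t0, h = 1.7, 1e-5
assert np.isclose(score(t0), (ell(t0 + h) - ell(t0 - h)) / (2 * h))
assert np.isclose(hess(t0), (score(t0 + h) - score(t0 - h)) / (2 * h))
```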

Taking expectations at the true $\theta$, using $E\left[\sum_{i=1}^{n} (X_i - \theta)\right] = 0$ and $E\left[\sum_{i=1}^{n} (X_i - \theta)^2\right] = n\theta^2$, the Fisher Information is:
$I(\theta) = -E\left[\frac{d^2\ell(\theta)}{d\theta^2}\right] = \frac{3n}{\theta^2}$

The asymptotic variance of the MLE is:
$\text{Var}(\hat{\theta}_{\text{MLE}}) \approx \frac{1}{I(\theta)} = \frac{\theta^2}{3n}$

Thus, the asymptotic distribution of $\hat{\theta}_{\text{MLE}}$ is approximately:
$ \hat{\theta}_{\text{MLE}} \sim N\left(\theta, \frac{\theta^2}{3n}\right)$

However, I am skeptical of this approach because I have never seen Fisher Information used like this; I thought that it only provides a lower bound for the variance, as per the Cramér-Rao theorem.
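One way to probe this skepticism is by simulation: repeatedly drawing samples, computing the closed-form estimator, and seeing which constant actually shows up in $n \cdot \text{Var}(\hat{\theta}) / \theta^2$. A hedged Monte Carlo sketch (NumPy; the true $\theta$, sample size, and replication count are arbitrary choices for the experiment):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.0, 400, 2000  # hypothetical settings for the experiment
est = np.empty(reps)
for r in range(reps):
    x = rng.normal(theta, abs(theta), size=n)
    xbar = x.mean()
    # closed-form positive root of the score equation
    est[r] = -xbar / 2 + np.sqrt((x**2).mean() + xbar**2 / 4)

# Empirical n * Var(theta_hat) / theta^2, to compare with 1 / I_1(theta)
ratio = n * est.var() / theta**2
print(ratio)  # empirically near 1/3, consistent with I_1(theta) = 3 / theta^2
```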