Maximum likelihood estimator of $\mu$ if $X_1\sim N(\mu,1)$


Question: Let $X_1, . . . , X_n$ be a random sample with $X_1 \sim N(\mu, 1)$ where $\mu \geq 0$. Derive the maximum likelihood estimator of $\mu$.


I have some difficulties understanding the question.

I already calculated the MLE of $\mu$ when $X_1,...,X_n$ are a random sample from $N(\mu,\sigma^2)$. In that case the MLE of $\mu$ is $\hat{\mu} = \bar{X}$.

Is it the same here? Or does the constraint $\mu\geq 0$ change anything? Or is the question perhaps saying that $X_2,...,X_n\sim N(\mu, \sigma^2)$? I'm not that familiar with statistical jargon.

BEST ANSWER

Knowing that $\sigma^2 =1$ slightly simplifies the calculation of the log-likelihood and the maximum likelihood estimator of $\mu$. But that is only part of the answer.

If $\bar X < 0$, then the constraint $\mu \ge 0$ rules out the unconstrained maximiser $\hat \mu = \bar X$, since it lies outside the parameter space. You can carry out the analysis formally, but it is intuitively clear that $\hat \mu = 0$ maximises the likelihood over $\mu \ge 0$ in that case. Combining the two cases, the maximum likelihood estimator is $$\hat \mu = \max(0, \bar X).$$

ANSWER

For a sample $\boldsymbol x = (x_1, \ldots, x_n)$, the likelihood function is $$\mathcal L(\mu \mid \boldsymbol x) \propto \prod_{i=1}^n e^{-(x_i - \mu)^2/2} = \exp \left(- \frac{1}{2} \sum_{i=1}^n (x_i - \mu)^2 \right).$$ Thus the log-likelihood is $$\ell(\mu \mid \boldsymbol x) = -\frac{1}{2} \sum_{i=1}^n (x_i - \mu)^2, \tag{1}$$ which, being minus one half times a sum of squares, is always nonpositive. Its maximum is therefore attained where $\sum_{i=1}^n (x_i - \mu)^2$ is minimized. But we may write $$\begin{align} \sum_{i=1}^n (x_i - \mu)^2 &= \sum_{i=1}^n (x_i - \bar x + \bar x - \mu)^2 \\ &= \sum_{i=1}^n \left[ (x_i - \bar x)^2 + 2(x_i - \bar x)(\bar x - \mu) + (\bar x - \mu)^2 \right] \\ &= \sum_{i=1}^n (x_i - \bar x)^2 + 2(\bar x - \mu) \sum_{i=1}^n (x_i - \bar x) + \sum_{i=1}^n (\bar x - \mu)^2 \\ &= \left(\sum_{i=1}^n (x_i - \bar x)^2 \right) + n(\bar x - \mu)^2, \tag{2} \end{align}$$ since $$\sum_{i=1}^n x_i = n \bar x$$ by definition, which implies $$\sum_{i=1}^n (x_i - \bar x) = 0.$$
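As a quick numerical sanity check of the decomposition $(2)$ (a sketch of my own, not part of the proof; it assumes NumPy, and the sample and the value of $\mu$ are arbitrary choices):

```python
import numpy as np

# Numerical check of the decomposition (2):
#   sum_i (x_i - mu)^2 = sum_i (x_i - xbar)^2 + n * (xbar - mu)^2
# The sample and mu below are arbitrary, purely for illustration.
rng = np.random.default_rng(0)
x = rng.normal(loc=-1.0, scale=1.0, size=50)
mu = 0.7
xbar = x.mean()

lhs = np.sum((x - mu) ** 2)
rhs = np.sum((x - xbar) ** 2) + len(x) * (xbar - mu) ** 2
assert np.isclose(lhs, rhs)  # the two sides agree up to floating-point error
```

The identity holds exactly; `np.isclose` is used only to absorb floating-point rounding.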

Thus, $(2)$ is the sum of a term that is constant with respect to $\mu$ and a term that varies with $\mu$ around the fixed value $\bar x$. Choosing $\mu = \bar x$, if allowed, attains the global minimum of $(2)$. However, if $\bar x < 0$, that choice is disallowed by the constraint $\mu \ge 0$. In that case, we write

$$(\bar x - \mu)^2 = (\mu + (-\bar x))^2$$

and since $-\bar x > 0$ and we require $\mu \ge 0$, it becomes obvious that $\mu = 0$ is the choice that minimizes $(\mu + (-\bar x))^2$.

Therefore, the (restricted) maximum likelihood estimator is given by $$\hat \mu = \max(0, \bar x).$$
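For what it's worth, here is a short numerical check (my own sketch, assuming NumPy; the sample sizes and grid are arbitrary choices) that $\hat \mu = \max(0, \bar x)$ matches a brute-force maximisation of the log-likelihood over the restricted range $\mu \ge 0$:

```python
import numpy as np

# Check that mu_hat = max(0, xbar) maximises the log-likelihood over mu >= 0,
# by comparing against a brute-force grid search. Sigma = 1 is fixed, as in
# the question; the sample sizes and grid are arbitrary illustrative choices.
def log_lik(mu, x):
    # log-likelihood up to an additive constant, cf. equation (1)
    return -0.5 * np.sum((x - mu) ** 2)

rng = np.random.default_rng(1)
for true_mu in (2.0, -1.0):               # cases with xbar > 0 and xbar < 0
    x = rng.normal(loc=true_mu, size=100)
    mu_hat = max(0.0, x.mean())           # the restricted MLE derived above
    grid = np.linspace(0.0, 5.0, 2001)    # admissible values mu >= 0
    best = grid[np.argmax([log_lik(m, x) for m in grid])]
    assert abs(best - mu_hat) < 0.01      # grid maximiser matches max(0, xbar)
```

When $\bar x < 0$ the grid search lands on $\mu = 0$, the boundary of the parameter space, exactly as the derivation predicts.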