Find the maximum likelihood estimator for $L(\sigma,\mu)=\frac{1}{\sigma^n}e^{-\frac{\sum_{i=1}^{n}x_i-n\mu}{\sigma}}$


Find the maximum likelihood estimator for $L(\sigma,\mu)=\frac{1}{\sigma^n}e^{-\frac{\sum_{i=1}^{n}x_i-n\mu}{\sigma}}\mathbb{I}_{(-\infty,X_{(1)}]}(\mu)$

We differentiate $L$ and equate to zero to find the critical points. We start by differentiating with respect to $\sigma$ \begin{equation} \frac{\partial L(\sigma, \mu)}{\partial\sigma} = \frac{\sum_{i=1}^{n} x_i-n\mu}{\sigma^{n+2}} e^{-\frac{\sum_{i=1}^{n}x_i-n\mu}{\sigma}} -\frac{n}{\sigma^{n+1}}e^{-\frac{\sum_{i=1}^{n}x_i-n\mu}{\sigma}} = 0\Rightarrow \sigma = \frac{\sum_{i=1}^{n}x_i-n\mu}{n} \end{equation}
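As a quick numerical sanity check on the critical point $\sigma = \frac{\sum_{i=1}^{n}x_i-n\mu}{n}$, the following Python sketch (with a made-up sample and a fixed $\mu$ below $X_{(1)}$, both purely illustrative) confirms that a finite-difference derivative of $L$ vanishes there and that the value is a maximum in $\sigma$:

```python
import math

# Hypothetical sample and a fixed mu below min(xs); both values are made up.
xs = [2.3, 3.1, 2.7, 4.0, 2.5]
n = len(xs)
mu = 2.0
S = sum(xs) - n * mu  # S = sum(x_i) - n*mu


def L(sigma):
    """Likelihood L(sigma, mu) = sigma^{-n} * exp(-S / sigma)."""
    return sigma ** (-n) * math.exp(-S / sigma)


sigma_star = S / n  # the critical point sigma = S/n derived above

# Central finite difference: the derivative at sigma_star should be
# (numerically) zero, and L should be smaller slightly to either side.
h = 1e-6
deriv = (L(sigma_star + h) - L(sigma_star - h)) / (2 * h)
```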

On the other hand, if we differentiate with respect to $\mu$, it follows that \begin{equation} \frac{\partial L(\sigma, \mu)}{\partial\mu} = \frac{n}{\sigma^{n+1}}e^{-\frac{\sum_{i=1}^{n}x_i-n\mu}{\sigma}}=0 \end{equation}

However, the derivative with respect to $\mu$ does not lead me anywhere in particular: it is never zero.

Another argument I was considering: since $L$ is increasing and differentiable, we could assume that the maximum over $\sigma$ is attained at $\min\{x_1,\ldots,x_n\}$ and the maximum over $\mu$ is reached at $\max\{x_1,\ldots,x_n\}$, that is, that the maximum likelihood estimator is $\left(X_{(1)},X_{(n)}\right)$.

Can someone help me know which way to go?



On BEST ANSWER

Taking $\ln$, we have the log-likelihood $l(\sigma, \mu) = -n\ln(\sigma) -\frac{1}{\sigma} \sum_{i = 1}^n (x_i - \mu)$, valid on the support $\mu \le X_{(1)}$ enforced by the indicator. Note that maximising $L$ is equivalent to maximising $l$. Hence, if $\tilde{\sigma}$, $\tilde{\mu}$ are the MLEs, then \begin{align*} \frac{\partial l(\tilde{\sigma}, \tilde{\mu})}{\partial \sigma} &= -\frac{n}{\tilde{\sigma}} + \frac{1}{\tilde{\sigma}^2} \sum_{i = 1}^n (x_i - \tilde{\mu}) = 0\\ \implies \tilde{\sigma} &= \frac{1}{n} \sum_{i = 1}^n (x_i - \tilde{\mu})\\ \frac{\partial l(\tilde{\sigma}, \tilde{\mu})}{\partial \mu} &= \frac{n}{\tilde{\sigma}} > 0 \end{align*} Hence, $l$ is strictly increasing in $\mu$, so under the constraint $\mu \le X_{(1)}$ it reaches its maximum at the boundary $\tilde{\mu} = X_{(1)}$ (not at $X_{(n)}$, which lies outside the support). Substituting back, we have \begin{align*} \tilde{\sigma} &= \frac{1}{n} \sum_{i = 1}^n \left(x_i - X_{(1)}\right) = \bar{x} - X_{(1)}\\ \tilde{\mu} &= X_{(1)} \end{align*}
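To corroborate the result, here is a minimal Python sketch (using simulated data with made-up "true" parameters, purely for illustration) that computes the closed-form estimates and checks that perturbing either estimate, or lowering $\mu$ below $X_{(1)}$, never increases the log-likelihood:

```python
import math
import random

random.seed(0)

# Simulate a shifted exponential X = mu + sigma * Exp(1); the "true"
# parameter values here are made up for the demonstration.
mu_true, sigma_true, n = 2.0, 1.5, 500
xs = [mu_true + sigma_true * random.expovariate(1.0) for _ in range(n)]


def log_lik(sigma, mu):
    """l(sigma, mu) = -n*ln(sigma) - (1/sigma) * sum(x_i - mu), with the
    indicator enforced by returning -inf when mu > X_(1) or sigma <= 0."""
    if mu > min(xs) or sigma <= 0:
        return float("-inf")
    return -n * math.log(sigma) - sum(x - mu for x in xs) / sigma


# Closed-form MLEs from the derivation above.
mu_hat = min(xs)                  # X_(1)
sigma_hat = sum(xs) / n - mu_hat  # x-bar minus X_(1)

best = log_lik(sigma_hat, mu_hat)
```

With 500 observations the estimates land close to the simulated parameters, and every perturbation of $(\tilde\sigma, \tilde\mu)$ yields a smaller log-likelihood, as expected from the boundary argument above.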