Interesting task on MLE estimators


Let $X_1,\ldots, X_n$ be the observations, such that $X_i = e^\xi$, where $\xi \sim U[0, \theta]$ (uniform distribution). Find MLE for $\theta$ and check it for consistency.

First, I found the density function:

Let $\phi(x) = e^x, \eta = \phi(\xi)$. Then $P(\eta \le x) = P(\xi \le \phi^{-1} (x)) \Rightarrow $ $$p_\eta (x) = \frac{p_\xi(\phi^{-1}(x))}{\phi'(\phi^{-1}(x))}$$ which means that $p_{X_i}(x) = \frac{1}{\theta x}I(\ln x \in [0,\: \theta])$. As a result, $$L = \frac{1}{\theta^n} \prod\limits_{i=1}^{n}\frac{1}{X_i}$$
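As a numerical sanity check on this density (a minimal Python sketch; the value of $\theta$ and the sample size are arbitrary choices), one can compare the empirical CDF of $X = e^\xi$ with the CDF implied by $p_X(x) = \frac{1}{\theta x}$ on $[1, e^\theta]$, namely $F(t) = \ln t / \theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0          # arbitrary true parameter for the check
n = 200_000

# X = exp(xi) with xi ~ U[0, theta]
x = np.exp(rng.uniform(0.0, theta, size=n))

# The derived density p_X(x) = 1/(theta*x) on [1, e^theta] implies
# the CDF F(t) = ln(t)/theta for t in that interval.
for t in (1.5, 3.0, np.exp(theta)):
    empirical = np.mean(x <= t)
    analytic = np.log(t) / theta
    print(f"t={t:6.3f}  empirical={empirical:.4f}  analytic={analytic:.4f}")
```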

however, I can't use the simple derivative method, since $$\frac{\partial}{\partial \theta} \ln L = \frac{\partial}{\partial \theta} \big[ -n\ln \theta - \sum\ln X_i \big] = -\frac{n}{\theta} \neq 0$$

and I feel a bit confused... Can anyone help me, please?


Best answer:

Subject to the constraint that $\xi \in [0,\theta]$, we must have $X_i \in [1, e^{\theta}]$; conversely, we must have $\theta \ge \max_i \log X_i$, because once you have observed some value $X$ from your sample, it is impossible for $\theta$ to be less than $\log X$. Thus $\theta$ is bounded from below by the logarithm of the largest observation in your sample, and since $\mathcal L(\theta \mid \boldsymbol x) \propto \theta^{-n}$ is a monotonically decreasing function of $\theta$ for fixed $n > 0$ and $\theta > 0$, it immediately follows that the likelihood is maximized precisely when we choose $\theta = \max_i \log X_i$.
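This argument can be illustrated numerically (a minimal Python sketch; the sample size and true $\theta$ are arbitrary): the log-likelihood is $-\infty$ for $\theta < \max_i \ln X_i$ and strictly decreasing above that point, so its maximum sits exactly at $\hat\theta = \max_i \ln X_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.5                                   # arbitrary true value
x = np.exp(rng.uniform(0.0, theta, size=50))  # transformed sample X_i

theta_hat = np.log(x).max()   # MLE: max_i ln X_i

def log_likelihood(t):
    # log L(t) = -n ln t - sum ln X_i, but only valid (nonzero likelihood)
    # when t >= max_i ln X_i; otherwise the likelihood is 0.
    if t < np.log(x).max():
        return -np.inf
    return -len(x) * np.log(t) - np.log(x).sum()

# -n ln t is strictly decreasing, so any t > theta_hat does worse,
# and any t < theta_hat gives zero likelihood.
print(theta_hat)
print(log_likelihood(theta_hat) > log_likelihood(theta_hat + 0.1))  # True
```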

This situation is analogous to the MLE of the uniform distribution itself (and this fact should not be surprising): the MLE of $\theta$, if we were instead given the sample $\boldsymbol \xi$ (i.e., the untransformed observations), is simply $\max_i \xi_i$. The relationship is evident: if you have the transformed (exponentiated) sample $\boldsymbol X$, then you can take its logarithm and compute the MLE in the untransformed space. If you have the original sample $\boldsymbol \xi$, you can exponentiate it and get the transformed sample $\boldsymbol X$.

It is also worthwhile to note that calculating critical points is not the be-all and end-all of finding extrema of functions: this strategy works when the function is differentiable, but as we remember from elementary calculus, it doesn't always capture global extrema on an interval: it captures relative extrema. If I asked you to give me the maximum of $f(x) = x^2$ on the interval $x \in [-2,2]$, would you then take the derivative $f'(x) = 2x$, set it to $0$, and conclude the maximum is at $x = 0$? That would be absurd. Or less trivially, what is the maximum of $f(x) = e^{-x}$ on $x \in [1,\infty)$? There are no critical points, yet it is obvious that $e^{-x}$ is maximized on this interval when $x = 1$.

Another answer:

The likelihood can be written as \begin{equation} L = \frac{1}{\theta^n} \prod_{i=1}^n \frac{\boldsymbol{1}_{[0,\theta]}(\ln X_i)}{X_i} \, , \end{equation} which is nonzero if $0 \leqslant \ln X_i \leqslant \theta$ for all $i$, i.e. $\max_i \ln X_i \leqslant \theta$. In this case, the likelihood $L$ decreases like $\theta^{-n}$ with respect to $\theta$. Therefore, the maximum likelihood estimator of $\theta$ is \begin{equation} \hat{\theta}_n = \max_{i=1\dots n} \ln X_i \, . \end{equation}
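As for the consistency part of the question: since $\hat\theta_n = \max_i \ln X_i = \max_i \xi_i$ with $\xi_i \sim U[0,\theta]$ i.i.d., we have $P(\theta - \hat\theta_n > \varepsilon) = (1 - \varepsilon/\theta)^n \to 0$ for any $0 < \varepsilon < \theta$, so $\hat\theta_n \to \theta$ in probability. A quick numerical illustration of this (a Python sketch; the true $\theta$ and sample sizes are arbitrary, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 2.0  # arbitrary true value

# theta_hat = max_i ln X_i = max_i xi_i; the gap theta - theta_hat satisfies
# P(theta - theta_hat > eps) = (1 - eps/theta)^n, which -> 0 as n grows.
for n in (10, 100, 1_000, 10_000):
    xi = rng.uniform(0.0, theta, size=n)
    theta_hat = np.log(np.exp(xi)).max()   # identical to xi.max()
    print(f"n={n:6d}  theta - theta_hat = {theta - theta_hat:.5f}")
```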