Deriving MLE of $\theta$ in $\text{Exp}(\theta,\theta)$ distribution


Suppose $X_1,X_2,\ldots,X_n$ are i.i.d. random variables having a two-parameter exponential distribution with common location and scale parameter $\theta$:

$$f_{\theta}(x)=\frac{1}{\theta}e^{-(x-\theta)/\theta}\mathbf1_{x>\theta}\quad,\,\theta>0$$

I am wondering if it is possible to derive a maximum likelihood estimator (MLE) of $\theta$.

The likelihood function given the sample $x_1,\ldots,x_n$ is

$$L(\theta)=\frac{1}{\theta^n}e^{-n(\bar x-\theta)/\theta}\mathbf1_{x_{(1)}>\theta}\quad,\,\theta>0,$$

where $\bar x=\frac{1}{n}\sum\limits_{i=1}^n x_i$ and $x_{(1)}=\min\limits_{1\le i\le n} x_i$.

Since $L(\theta)$ is not differentiable at $\theta=x_{(1)}$, I cannot apply the second-derivative test here.

Even if I could show that $L(\theta)$ is monotone in $\theta$ on the constraint set $\theta<x_{(1)}$, I am not sure which choice of $\theta$ maximises $L(\theta)$; as I understand it, setting the derivative to zero is not valid here. Since $x_{(1)}\le\bar x$ always (strictly so, almost surely, for $n\ge 2$), the constraint is effectively $\theta<x_{(1)}\le\bar x$.

If the MLE is unique, it is likely to be a function of the sufficient statistic $(\overline X,X_{(1)})$. However, I don't see how to derive it in this particular model. Any suggestion would be great.


The joint likelihood is $$\mathcal L (\theta \mid \boldsymbol x) = \theta^{-n} e^{-n(\bar x - \theta)/\theta} \mathbb 1(x_{(1)} \ge \theta)$$ (taking the support as $x \ge \theta$ so that the supremum is attained). The unique critical point of the unrestricted log-likelihood occurs where $$0 = \frac{d}{d\theta} \left[\log \mathcal L \right] = -\frac{n}{\theta} + \frac{n\bar x}{\theta^2} = \frac{n}{\theta^2}(\bar x - \theta),$$ i.e. at $\theta = \bar x \ge x_{(1)}$. The derivative is therefore positive for all $\theta < \bar x$, so the likelihood is increasing on $\theta \in (0, x_{(1)}]$, and the global maximum under the constraint must be attained at the minimum order statistic: $\hat \theta = x_{(1)}$.
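The monotonicity argument is easy to verify numerically. A minimal sketch (the true $\theta = 2$ and sample size are arbitrary choices for illustration) checks that the log-likelihood is strictly increasing on a grid over $(0, x_{(1)}]$, so its constrained maximum sits at the right endpoint:

```python
import math
import random

random.seed(42)
theta = 2.0   # hypothetical true parameter (illustration only)
n = 50

# Exp(theta, theta): X = theta + theta * E with E ~ Exp(1)
x = [theta + theta * random.expovariate(1.0) for _ in range(n)]
xbar = sum(x) / n
xmin = min(x)

def log_lik(t):
    # log L(t) = -n*log(t) - n*(xbar - t)/t, valid for 0 < t <= xmin
    return -n * math.log(t) - n * (xbar - t) / t

# evaluate on a grid over (0, xmin]; values should strictly increase
grid = [xmin * k / 100 for k in range(1, 101)]
vals = [log_lik(t) for t in grid]
assert all(a < b for a, b in zip(vals, vals[1:]))

print("MLE = min(x) =", xmin)
```

The assertion holds because the score $\frac{n}{t^2}(\bar x - t)$ is positive for every $t < \bar x$, and the whole grid lies below $\bar x$ since $x_{(1)} \le \bar x$.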

It is worth noting that while the MLE can exceed the true value of $\theta$, it cannot exceed the minimum order statistic: a value of $\theta$ larger than $x_{(1)}$ could not have generated the smallest observation in the sample. Conversely, the MLE cannot fall below the true value of $\theta$, since every observation satisfies $x_i \ge \theta$, and in particular $x_{(1)} \ge \theta$. In short, we always have $$0 < \theta \le \hat \theta = x_{(1)} \le \bar x.$$
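This ordering, and the resulting positive bias of the MLE, can be illustrated with a small Monte Carlo sketch (the parameter values are arbitrary assumptions; the theoretical mean $E[X_{(1)}] = \theta(1 + 1/n)$ follows because $\min_i E_i \sim \text{Exp}(n)$ for i.i.d. standard exponentials):

```python
import random

random.seed(0)
theta, n, reps = 2.0, 20, 2000  # illustrative values

mles = []
for _ in range(reps):
    # draw a sample from Exp(theta, theta) and record the MLE
    x = [theta + theta * random.expovariate(1.0) for _ in range(n)]
    mles.append(min(x))

# every MLE is at least the true theta, as argued above
assert all(m >= theta for m in mles)

avg = sum(mles) / reps
print(f"true theta = {theta}, average MLE = {avg:.3f}")
# theory: E[MLE] = theta * (1 + 1/n) = 2.1 for these values
```

The simulated average should land near $2.1$, showing that $\hat\theta = X_{(1)}$ overestimates $\theta$ on average, consistent with the inequality chain above.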