Applying the chain rule to dependent variables in statistics


Here is the problem that I'm dealing with:

$x_1, x_2, \ldots, x_n$ are independent and identically distributed samples from the distribution with pdf $f(x) = \lambda e^{-\lambda (x-\mu)}$ for $x \ge \mu$ and $f(x) = 0$ for $x < \mu$, where $\lambda, \mu > 0$ are unknown parameters.

Find the maximum likelihood estimator for $\beta$, where $\beta = \mu + 1/\lambda$.

Since the joint density is given as $f = \lambda^n e^{-n\lambda\Sigma(x_i-\mu)}$, I took $\log f$ and tried differentiating with respect to $\beta$ to find the maximum.

$$\frac{d \log f}{d \beta} = \frac{\partial \log f}{\partial \mu} \frac{\partial \mu}{\partial \beta} + \frac{\partial \log f}{\partial \lambda} \frac{\partial \lambda}{\partial \beta}$$

But I'm not sure if this is appropriate since I regarded $\mu$ as a function of $\beta$ and $\lambda$ to find a partial derivative, and then regarded $\lambda$ as a function of $\beta$ and $\mu$ to find another partial derivative.

Is this method appropriate? Or should I take a different approach?


There are 2 answers below.

BEST ANSWER

You have $\beta = \mu+ \dfrac 1 \lambda.$ The equivariance of maximum-likelihood estimation (which people often call "invariance") says that if $\widehat\mu$ and $\widehat\lambda$ are the MLEs of $\mu$ and $\lambda$ respectively, then the MLE of $\beta$ is $\widehat\mu + \dfrac 1 {\widehat\lambda}.$

You have an extra factor of $n$ in front of the sum. I've deleted it below:

You have $$ L(\mu,\lambda) = \begin{cases} \lambda^n \exp\left(-\lambda\sum_{i=1}^n (x_i-\mu)\right) & \text{if } \mu\le\text{all of } x_1,\ldots,x_n, \\ 0 & \text{if } \mu > \text{at least one of } x_1,\ldots,x_n. \end{cases} $$

First, notice that $L(\mu,\lambda)$ increases as $\mu$ increases, until $\mu$ exceeds the smallest of $x_1,\ldots,x_n$ (at which point $L = 0$).

Therefore you have $\widehat\mu = \min\{x_1,\ldots,x_n\}.$

So you have $$ L(\min\{x_1,\ldots,x_n\}, \lambda) = \lambda^n \exp\left( -\lambda \sum_{i=1}^n (x_i-\min\{x_1,\ldots,x_n\}) \right) $$ $$ \ell(\min,\lambda) = \log L(\min,\lambda) = n\log\lambda - \lambda \sum_{i=1}^n (x_i-\min) $$ $$ \frac d {d\lambda} \ell(\min,\lambda) = \frac n \lambda - \sum_{i=1}^n (x_i-\min) \begin{cases} \ge 0 & \text{if } \lambda\le n\Big/\sum_{i=1}^n (x_i-\min), \\[4pt] \le 0 & \text{if } \lambda \ge n\Big/\sum_{i=1}^n (x_i-\min). \end{cases} $$ That gives you the MLE $\widehat\lambda = n\Big/\sum_{i=1}^n \bigl(x_i - \min\{x_1,\ldots,x_n\}\bigr).$
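As a quick numerical sanity check (not part of the original answer), here is a short Python sketch that simulates a shifted-exponential sample and evaluates the closed-form MLEs $\widehat\mu = \min\{x_i\}$ and $\widehat\lambda = n/\sum(x_i - \widehat\mu)$; the sample size, seed, and true parameter values are arbitrary choices:

```python
import math
import random

random.seed(0)
lam_true, mu_true, n = 2.0, 5.0, 1000

# Shifted exponential: x = mu + Exp(lambda)
xs = [mu_true + random.expovariate(lam_true) for _ in range(n)]

# Closed-form MLEs from the derivation above
mu_hat = min(xs)
lam_hat = n / sum(x - mu_hat for x in xs)

def loglik(mu, lam):
    # Log-likelihood of the sample; -inf when mu exceeds some observation
    if mu > min(xs) or lam <= 0:
        return float("-inf")
    return n * math.log(lam) - lam * sum(x - mu for x in xs)

# The MLE should beat nearby parameter values
assert loglik(mu_hat, lam_hat) >= loglik(mu_hat, lam_hat * 1.1)
assert loglik(mu_hat, lam_hat) >= loglik(mu_hat - 0.1, lam_hat)

print(mu_hat, lam_hat)  # should be close to mu_true and lam_true
```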


First, I would substitute $y = x - \mu$. Then the pdf is

$$f(y)=\begin{cases}\lambda \cdot e^{-\lambda y}, & y\geq 0 \\ 0, & \text{elsewhere} \end{cases}$$

It can be shown that the maximum likelihood estimator for $\frac1{\lambda}$ is $\frac{\sum\limits_{i=1}^n y_i}{n}$.

And $\frac{\sum\limits_{i=1}^n y_i}{n}=\frac{\sum\limits_{i=1}^n x_i}{n}-\mu$.

Consequently, the estimator for $\frac1{\lambda}+\mu$ is $\frac{\sum\limits_{i=1}^n x_i}{n}=\overline x$.
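The conclusion that the estimate equals the sample mean is in fact an algebraic identity: $\min\{x_i\} + \frac1n\sum_i (x_i - \min\{x_i\}) = \bar x$ for any sample. A minimal Python check (the sample values below are arbitrary):

```python
# mu_hat + 1/lam_hat = min(x) + mean(x - min(x)) = mean(x), identically
xs = [5.3, 6.1, 5.8, 7.2, 5.05]  # arbitrary sample
n = len(xs)

mu_hat = min(xs)
inv_lam_hat = sum(x - mu_hat for x in xs) / n  # MLE of 1/lambda

beta_hat = mu_hat + inv_lam_hat
assert abs(beta_hat - sum(xs) / n) < 1e-12  # equals the sample mean
```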