MLE estimation for a two-parameter Pareto (with a slightly different PDF)


The PDF is given as $f(x) = \frac{\alpha \lambda^\alpha}{(\lambda + x)^{\alpha + 1}}$ for $x > 0$, with $\alpha > 0$, $\lambda > 0$.

I found the log-likelihood to be:

$ln(L(\alpha,\lambda)) = n \cdot ln(\alpha) + \alpha \cdot n \cdot ln(\lambda) - (\alpha + 1) \cdot \displaystyle\sum_{i=1}^{n}ln(\lambda+x_i)$

Setting the partial derivative with respect to $\lambda$ to zero cannot be solved in closed form, while for $\alpha$ I get

$\hat \alpha = \frac{n}{\displaystyle\sum_{i=1}^{n}ln(\lambda+x_i) - n \cdot ln(\lambda)}$

How can I find the MLEs of $\lambda$ and $\alpha$ from here? We are allowed to use R for the computation if iteration is needed. We are given a dataset of 30 values from this Pareto distribution.



To make the notation simpler, define $S(\lambda) = \displaystyle\sum_{i=1}^{n}\log(\lambda+x_i)$ and rewrite $$\log\left(L(\alpha,\lambda)\right) = n \, \log(\alpha) + \alpha \, n \, \log(\lambda) - (\alpha + 1) \, \displaystyle\sum_{i=1}^{n}\log(\lambda+x_i)$$ as $$\log\left(L(\alpha,\lambda)\right) = n \, \log(\alpha) + \alpha \, n \, \log(\lambda) - (\alpha + 1) \,S(\lambda)$$

Taking the derivative with respect to $\alpha$ and setting it to zero, you correctly showed that $$\alpha=\frac{n}{S(\lambda )-n \log (\lambda )}$$ So, treat $\alpha$ as a function of $\lambda$, i.e. $\alpha=\alpha(\lambda )$.

Now, set the derivative with respect to $\lambda$ to zero to get $$n\frac{\alpha(\lambda ) }{\lambda }=(\alpha(\lambda ) +1)\, S'(\lambda )$$ where $S'(\lambda) = \displaystyle\sum_{i=1}^{n}\frac{1}{\lambda+x_i}$. This is "just" a nonlinear equation in the single unknown $\lambda$.

Provided a "reasonable" starting guess of the solution, Newton's method (even using numerical derivatives) will do the job; once $\hat\lambda$ is found, plug it back into $\alpha(\lambda)$ to obtain $\hat\alpha$.
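The procedure above can be sketched numerically. The question permits R, but the same idea carries over to any language; here is a minimal illustration in Python on synthetic data (the true parameter values, sample size, and starting guess are all illustrative assumptions, not part of the original problem):

```python
import math
import random

def lomax_sample(n, alpha, lam, seed=1):
    # Inverse-CDF sampling: F(x) = 1 - (lam / (lam + x))^alpha,
    # so x = lam * ((1 - u)^(-1/alpha) - 1) for u ~ Uniform(0, 1).
    rng = random.Random(seed)
    return [lam * ((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0) for _ in range(n)]

def alpha_hat(lam, xs):
    # Profile MLE of alpha for a fixed lambda:
    # alpha(lam) = n / (S(lam) - n * log(lam))
    n = len(xs)
    S = sum(math.log(lam + x) for x in xs)
    return n / (S - n * math.log(lam))

def g(lam, xs):
    # Score equation in lambda with alpha profiled out:
    # n * alpha(lam) / lam - (alpha(lam) + 1) * S'(lam),
    # where S'(lam) = sum 1 / (lam + x_i).  Its root is the MLE of lambda.
    n = len(xs)
    a = alpha_hat(lam, xs)
    Sp = sum(1.0 / (lam + x) for x in xs)
    return n * a / lam - (a + 1.0) * Sp

def newton(f, x0, xs, tol=1e-10, h=1e-6, max_iter=200):
    # Newton's method with a central-difference numerical derivative.
    x = x0
    for _ in range(max_iter):
        fx = f(x, xs)
        fpx = (f(x + h, xs) - f(x - h, xs)) / (2.0 * h)
        step = fx / fpx
        x_new = x - step
        if x_new <= 0.0:        # keep lambda in the admissible region
            x_new = x / 2.0
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative values: true alpha = 3, true lambda = 2, n = 30 as in the question.
xs = lomax_sample(30, alpha=3.0, lam=2.0)
lam_hat = newton(g, x0=1.0, xs=xs)
a_hat = alpha_hat(lam_hat, xs)
print("lambda_hat =", lam_hat, "alpha_hat =", a_hat)
```

With only $n = 30$ observations the estimates can be quite noisy, so do not expect them to match the true values closely; the point is the two-step structure (root-find for $\lambda$, then evaluate $\alpha(\lambda)$).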