How do I find the M.o.M and M.L.E estimators of a parameter?


I have the following problem and I would like to find the method-of-moments (MoM) and maximum-likelihood (MLE) estimators of the parameter λ. The function below is a probability density function.

$$ f(x) = \lambda \theta^{\lambda} x^{-\lambda - 1}, \qquad x \geq \theta,\ \lambda > 1 $$

Thanks in advance.

  • MoM estimator

If you calculate the expected value under this density, you'll find $E(X)=\frac{\theta \lambda}{\lambda-1}$ (which is finite precisely because $\lambda>1$). You can invert this relation to get

$$\lambda = \frac{\frac{E(X)}{\theta}}{\frac{E(X)}{\theta}-1}= \frac{E(X)}{E(X)-\theta}$$

So, if you have a sample of $n$ values $X_1,\dots, X_n$, you can estimate $E(X)$ by the empirical mean $\mu = \frac{1}{n}\sum_{i=1}^n X_i$, and your MoM estimator is: $$\hat{\lambda}=\frac{\frac{1}{n}\sum_{i=1}^n X_i}{\frac{1}{n}\sum_{i=1}^n X_i-\theta}$$
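As a quick numerical sanity check (not part of the derivation; the sampler, the variable names, and the choices of $\theta$, $\lambda$, $n$ are all mine), you can simulate from this density via the inverse CDF, $X = \theta\, U^{-1/\lambda}$ with $U$ uniform on $(0,1)$, and apply the MoM formula:

```python
import numpy as np

# Hypothetical sanity check: draw Pareto(theta, lam) samples via the
# inverse CDF, X = theta * U**(-1/lam), then apply the MoM formula.
# theta, lam and n are arbitrary illustrative choices.
rng = np.random.default_rng(0)
theta, lam, n = 2.0, 3.0, 100_000
x = theta * rng.uniform(size=n) ** (-1.0 / lam)

def mom_lambda(x, theta):
    """MoM estimator: lambda_hat = mean(x) / (mean(x) - theta)."""
    m = x.mean()
    return m / (m - theta)

print(mom_lambda(x, theta))  # close to the true lam for large n
```

For a sample this large the estimate should land near the true value of 3.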

  • MLE estimator

The likelihood for $n$ values is defined as:

$$L(x_1,\dots, x_n, \lambda)=\prod_{i=1}^n f(x_i,\lambda)$$

It is more convenient to use the log-likelihood:

$$l(x_1,\dots, x_n,\lambda) = \ln L(x_1,\dots, x_n,\lambda) = \sum_{i=1}^n \ln f(x_i,\lambda)$$

As $\ln f(x,\lambda) = \lambda \ln (\theta) + \ln(\lambda) -(\lambda+1)\ln(x)$, the log-likelihood is equal to

$$l(x_1,\dots, x_n,\lambda) = n\lambda\ln(\theta)+n\ln(\lambda)-(\lambda+1)\sum_{i=1}^n \ln(x_i)$$

The whole strategy of MLE is to find the $\lambda$ that maximizes the (log-)likelihood, i.e. $\lambda$ such that $\frac{\partial}{\partial \lambda}l = 0$. Here,

$$\frac{\partial}{\partial \lambda}l(x_1,\dots, x_n,\lambda) = n\ln(\theta) + \frac{n}{\lambda}-\sum_{i=1}^n \ln(x_i)$$

Solving $\frac{\partial}{\partial \lambda}l = 0$ leads to

$$\lambda = \frac{n}{\sum_{i=1}^n \ln(x_i) - n\ln(\theta)} = \left( \frac{1}{n}\sum_{i=1}^n \ln\left(\frac{x_i}{\theta}\right)\right)^{-1}$$

So, the MLE estimator for $\lambda$, provided a sample of size $n$, $X_1,\dots, X_n$, is: $$\hat{\lambda} = \left( \frac{1}{n}\sum_{i=1}^n \ln\left(\frac{X_i}{\theta}\right)\right)^{-1}$$
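Since the MLE has a closed form, it is a one-liner in code. The sketch below (my own names and parameter choices, reusing the inverse-CDF sampler $X = \theta\, U^{-1/\lambda}$) checks it on simulated data:

```python
import numpy as np

def mle_lambda(x, theta):
    """Closed-form MLE: lambda_hat = 1 / mean(log(x_i / theta))."""
    return 1.0 / np.mean(np.log(x / theta))

# Sketch of a check on simulated data; theta, lam, n are arbitrary.
rng = np.random.default_rng(0)
theta, lam, n = 2.0, 3.0, 100_000
x = theta * rng.uniform(size=n) ** (-1.0 / lam)  # inverse-CDF sampling
print(mle_lambda(x, theta))  # close to the true lam for large n
```

This works because $\ln(X/\theta)$ is exponential with rate $\lambda$, so its sample mean converges to $1/\lambda$.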

  • Conclusion

The two techniques give different estimators in this case. To know which one is better, you'll have to investigate the bias, variance, and consistency of the estimators.
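If you want a first empirical look at that comparison, a small Monte Carlo sketch (all names and parameter choices here are arbitrary, not from the question) estimates the bias and spread of each estimator over repeated samples:

```python
import numpy as np

# Hedged Monte Carlo sketch comparing the two estimators.
# theta, lam, n, reps are arbitrary illustrative choices.
rng = np.random.default_rng(1)
theta, lam, n, reps = 2.0, 3.0, 500, 2000

mom, mle = [], []
for _ in range(reps):
    x = theta * rng.uniform(size=n) ** (-1.0 / lam)  # inverse-CDF sample
    m = x.mean()
    mom.append(m / (m - theta))                   # MoM estimate
    mle.append(1.0 / np.mean(np.log(x / theta)))  # MLE estimate

print("MoM bias:", np.mean(mom) - lam, "sd:", np.std(mom))
print("MLE bias:", np.mean(mle) - lam, "sd:", np.std(mle))
```

On a run like this you should see the MLE with a smaller spread, consistent with its asymptotic efficiency, though a simulation is no substitute for the analysis suggested above.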