Finding the MLE of $(1-p)^3$ from a geometric distribution


I found the MLE of $p$ for this problem:

Let $X_1, X_2, \ldots, X_n$ be a random sample from the geometric distribution with parameter $p$ and p.d.f. $f(x;p)=(1-p)^{x-1}p$, $x=1,2,3,\ldots$

The likelihood function:

$L(p)=(1−p)^{x_1−1}p(1−p)^{x_2−1}p...(1−p)^{x_n−1}p=p^n(1−p)^{\sum_{i=1}^nx_i−n}$

Taking the logarithm,

$\ln L(p)=n\ln p+(\sum_{i=1}^n x_i−n)\ln(1−p)$

Differentiating and equating to zero, we get,

$\frac{d[\ln L(p)]}{dp}=\frac np−\frac{\sum_{i=1}^n x_i−n}{1−p}=0$

Therefore,

$p=\frac{n}{\sum_{i=1}^n x_i}$

So, the maximum likelihood estimator of $p$ is:

$\hat p=\frac n{\sum_{i=1}^n X_i}=\frac{1}{\overline X}$
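As a quick numerical sanity check of this closed form (a sketch with an assumed simulation setup, not part of the original question), one can simulate geometric samples and confirm that $\hat p = 1/\overline X$ agrees with a grid-search maximizer of $\ln L(p)$:

```python
# Sanity check: the closed-form MLE p_hat = 1/x_bar should coincide with
# the maximizer of ln L(p) found by a brute-force grid search.
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.3
x = rng.geometric(p_true, size=10_000)  # support 1, 2, 3, ... (matches the p.d.f. above)

def log_lik(p, x):
    # ln L(p) = n ln p + (sum x_i - n) ln(1 - p)
    n = len(x)
    return n * np.log(p) + (x.sum() - n) * np.log(1 - p)

p_hat = len(x) / x.sum()                 # closed-form MLE: 1 / x_bar
grid = np.linspace(0.01, 0.99, 999)
p_grid = grid[np.argmax(log_lik(grid, x))]

print(p_hat, p_grid)                     # both should be close to p_true
```

Both values agree up to the grid resolution and lie near the true $p$.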

But I'm unsure of how to obtain the MLE of $1/p$ and $(1-p)^3$

Is it done through a similar method?


Yes, you proceed in a similar way. First express $p$ through the new variable $\theta=\frac1p$ or $\theta=(1-p)^3$ and rewrite the p.d.f. in terms of $\theta$.

Say, for the first case $p=\frac1\theta$, $$ f(x;p(\theta))=\left(1−\frac1\theta\right)^{x−1}\frac1\theta,\quad x=1,2,3,\ldots $$

The likelihood function: $$ L(p(\theta))=\left(1−\frac1\theta\right)^{X_1−1}\frac1\theta \cdot \ldots \cdot \left(1−\frac1\theta\right)^{X_n−1}\frac1\theta = \dfrac{1}{\theta^n}\left(1−\frac1\theta\right)^{\sum_{i=1}^n X_i−n} $$ It is convenient to write $n\overline X$ instead of $\sum_{i=1}^n X_i$: $$ \ln L(p(\theta))= -n\ln \theta + (n\overline X - n)\ln\left(1−\frac1\theta\right) = -n\ln \theta + n(\overline X - 1)\bigl(\ln(\theta−1)-\ln\theta\bigr). $$ Differentiate with respect to $\theta$: $$ \dfrac{d\ln L(p(\theta))}{d\theta} = -\dfrac{n}\theta+ n(\overline X - 1)\left(\frac{1}{\theta-1}-\dfrac1\theta\right). $$ The solution of $\dfrac{d\ln L(p(\theta))}{d\theta}=0$ is $\hat\theta=\overline X$. We can also check that this point maximizes $L(p(\theta))$; for instance, the second derivative is negative there.
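To see that $\hat\theta=\overline X$ really maximizes the reparametrized log-likelihood, here is a small numerical check (a sketch; the simulated sample is an assumption for illustration):

```python
# Check numerically that theta_hat = x_bar maximizes the log-likelihood
# reparametrized by theta = 1/p.
import numpy as np

rng = np.random.default_rng(1)
x = rng.geometric(0.25, size=5_000)      # true theta = 1/0.25 = 4
n, x_bar = len(x), x.mean()

def log_lik_theta(theta):
    # ln L(p(theta)) = -n ln(theta) + n(x_bar - 1)(ln(theta - 1) - ln(theta))
    return -n * np.log(theta) + n * (x_bar - 1) * (np.log(theta - 1) - np.log(theta))

grid = np.linspace(1.01, 10.0, 9000)
theta_grid = grid[np.argmax(log_lik_theta(grid))]

print(theta_grid, x_bar)                 # agree up to grid resolution
```

The grid maximizer matches $\overline X$ up to the grid spacing, as the derivation predicts.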

The other way is to note that $$\dfrac{d\ln L(p(\theta))}{d\theta}=\dfrac{d\ln L(p)}{dp}\times\dfrac{dp(\theta)}{d\theta}.$$ Here $L(p(\theta))$ is your initial $L(p)$, and you have already found the first factor. In both cases $\dfrac{dp(\theta)}{d\theta}\neq 0$ for $0<p<1$, since both functions $\theta=\frac1p$ and $\theta=(1-p)^3$ are strictly monotone.

So the value $\hat p=\dfrac{1}{\overline X}$, which makes $\dfrac{d\ln L(p)}{dp}$ zero, also makes $\dfrac{d\ln L(p(\theta))}{d\theta}$ zero, and you need only solve $$ \hat p=\dfrac{1}{\overline X} = \frac{1}{\hat\theta}, \qquad \hat\theta = \overline X. $$

Edit: Note that you can use the invariance property of the MLE and write out both estimates immediately, instead of deriving them from the definition: $\widehat{1/p}=\overline X$ and $\widehat{(1-p)^3}=\left(1-\frac1{\overline X}\right)^3$.
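The invariance property can be illustrated in a few lines (the sample values below are hypothetical, chosen only for the example):

```python
# Invariance of the MLE: the MLE of any function g(p) is g(p_hat),
# so both requested estimates follow immediately from p_hat = 1/x_bar.
import numpy as np

x = np.array([2, 5, 1, 3, 4, 2, 6, 1])   # hypothetical geometric sample
p_hat = len(x) / x.sum()                 # MLE of p        -> 8/24 = 1/3
theta1_hat = 1 / p_hat                   # MLE of 1/p      -> x_bar = 3
theta2_hat = (1 - p_hat) ** 3            # MLE of (1-p)^3  -> (2/3)^3 = 8/27

print(p_hat, theta1_hat, theta2_hat)
```

No fresh optimization is needed; plugging $\hat p$ into each function of $p$ gives its MLE directly.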