MLE of $p$ for a sequence of RVs with distribution $ P(X_i = k) = (k+1) p^2 (1-p)^k $


Let $ X_i $ be a sequence of i.i.d. random variables with distribution

$ P(X_i = k) = (k+1) p^2 (1-p)^k $ with $ k \in \mathbb{N}_0 $, where we set $ \theta := p \in (0,1] $ and $ p $ is unknown. Compute the MLE $ \hat\theta $ for $ \tau(\theta) = \theta $.
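As an aside (not part of the original post), this pmf is that of a negative binomial distribution counting the failures before the 2nd success with success probability $p$, and one can numerically check that it sums to 1. A minimal sketch, with an arbitrary choice of $p$:

```python
import numpy as np

# Sanity check: the pmf (k+1) p^2 (1-p)^k should sum to 1 over k = 0, 1, 2, ...
# (it is the negative binomial pmf for the number of failures before the 2nd
# success). p = 0.3 is an arbitrary test value.
p = 0.3
k = np.arange(2000)  # the tail beyond k = 2000 is negligibly small
pmf = (k + 1) * p**2 * (1 - p) ** k
print(pmf.sum())  # ≈ 1.0
```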

My solution:

The likelihood function is $ L(x,\theta) = \prod_{i=1}^n (k_i+1) \theta^2 (1-\theta)^{k_i} = \theta^{2n} \prod_{i=1}^n (k_i+1)(1-\theta)^{k_i} $

The log-likelihood function is

$ \log L(x,\theta) = 2n \log(\theta) + n \log(1-\theta) \sum_{i=1}^n (k_i +1) $

so we get:

$ \frac{d}{d\theta} \log L(x,\theta) = \frac{2n}{\theta} - \frac{n}{1 - \theta} \sum_{i=1}^n (k_i +1) \overset{!}=0 $

I get that $ \frac{2 -2\theta}{\theta} = \sum_{i=1}^n (k_i + 1) \Leftrightarrow \theta = \frac{2}{\sum_{i=1}^n (k_i+1)+2}$

I am not sure if that is the correct estimator. That's not asked in the task, but how do I check, in this case, whether the estimator is unbiased?

Best answer:

You made a mistake finding the log-likelihood function: the term $\sum_{i=1}^n \log(k_i+1)$ should be a separate summand, not a factor of $\log(1-\theta)$. The log-likelihood function is $$ \log L(X_1,\ldots,X_n,\theta) = 2n \log(\theta) + \sum_{i=1}^n X_i \log(1-\theta) + \sum_{i=1}^n \log(X_i +1), $$ and then $$ \frac{d}{d\theta} \log L(X_1,\ldots,X_n,\theta) = \frac{2n}{\theta} - \frac{\sum_{i=1}^n X_i}{1 - \theta} \overset{!}=0. $$ It is convenient to replace $\sum_{i=1}^n X_i$ by $n\overline X$. Solving for $\theta$ gives the estimator $$ \hat\theta_n=\frac{2}{\overline X+2}. $$ It is biased, by Jensen's inequality. Indeed, $\overline X\geq 0$ and the function $\frac{2}{x+2}$ is convex for $x\geq 0$, so Jensen's inequality gives $$ \mathop{\mathbb E}\left(\frac{2}{\overline X+2}\right) > \frac{2}{\mathop{\mathbb E}\bigl(\overline X\bigr)+2}=\frac{2}{\mathop{\mathbb E}\left(X_1\right)+2}=\frac{2}{2\frac{1-\theta}{\theta}+2}=\theta, $$ using $\mathop{\mathbb E}(X_1)=2\frac{1-\theta}{\theta}$ for this distribution. The inequality is strict since the function is not linear and $\overline X$ is not a constant. So the estimator is biased.
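The upward bias can be illustrated by simulation. A minimal sketch (not from the original answer): NumPy's `negative_binomial(2, p)` draws the number of failures before the 2nd success, which is exactly the distribution $(k+1)p^2(1-p)^k$ above; the values of $\theta$, $n$, the number of replications, and the seed are arbitrary choices.

```python
import numpy as np

# Monte Carlo check that hat_theta = 2 / (mean(X) + 2) overestimates theta.
rng = np.random.default_rng(0)
theta = 0.4       # true parameter (arbitrary test value)
n, reps = 20, 100_000

# reps independent samples of size n from the distribution above
samples = rng.negative_binomial(2, theta, size=(reps, n))
theta_hat = 2.0 / (samples.mean(axis=1) + 2.0)

print(theta_hat.mean())  # above theta = 0.4, illustrating the upward bias
```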

To check its asymptotic unbiasedness you can use the bounded convergence theorem.

First of all, $\hat\theta_n \xrightarrow{p}\theta$ as $n\to\infty$ by the LLN and the continuity of $x\mapsto\frac{2}{x+2}$, so the estimator is consistent. Moreover, $\hat\theta_n$ is bounded: $0<\hat\theta_n\leq 1$ for each $n$. Then $\lim_{n\to\infty}\mathbb E(\hat\theta_n)=\theta$.
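The shrinking bias can also be seen numerically. A sketch (not from the original answer) comparing the Monte Carlo bias of $\hat\theta_n$ across sample sizes; the values of $\theta$, the sample sizes, the replication count, and the seed are arbitrary choices:

```python
import numpy as np

# The bias of hat_theta_n = 2 / (X_bar + 2) is positive and shrinks as n grows,
# in line with the bounded convergence argument above.
rng = np.random.default_rng(1)
theta, reps = 0.4, 50_000

biases = []
for n in (5, 50, 200):
    samples = rng.negative_binomial(2, theta, size=(reps, n))
    biases.append((2.0 / (samples.mean(axis=1) + 2.0)).mean() - theta)

print(biases)  # positive and decreasing toward 0
```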