Show that the method of moments estimator of $\theta$ is biased


Let $X_1, \dots, X_n$ be a random sample from the probability distribution with pdf $f(x) = \theta x^{\theta - 1}$, $0 < x < 1$, where $\theta > 0$.

I've found the method of moments estimator for $\theta$ to be $\hat\theta_{mom} = \frac{\overline X}{1 - \overline X}$.
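For reference, this comes from matching the first moment of the distribution to the sample mean:

$$E[X] = \int_0^1 x \cdot \theta x^{\theta - 1}\,dx = \int_0^1 \theta x^{\theta}\,dx = \frac{\theta}{\theta + 1}$$

Setting $\overline X = \frac{\theta}{\theta + 1}$ and solving for $\theta$ gives $\hat\theta_{mom} = \frac{\overline X}{1 - \overline X}$.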

Next, I'm asked to show that $\hat\theta_{mom}$ is biased if $n=1$ and $\theta=1$. I'm struggling to understand the solution provided, which is as follows: if $n=1$ and $\theta=1$, then $f(x)=1$ and $\overline X = X_1$, so

$$E[\hat\theta_{mom}] = E\left[\frac{X_1}{X_1 + 1}\right] = 1 - E\left[\frac{1}{X_1 + 1}\right] = 1 - \int_0^1 \frac{1}{1 - x}\,dx = -\infty \ne \theta$$

I understand why a result of $-\infty$, which is not equal to $\theta$, proves the estimator is biased; however, I don't understand how the expectations and the integral have been formulated.

If $\hat\theta_{mom} = \frac{\overline X}{1 - \overline X}$, then why is $E[\hat\theta_{mom}]$ not equal to $E\left[\frac{X_1}{1 - X_1}\right]$? And how does the expectation turn into the integral?

I hope somebody can shed some light on what it is that I'm missing here.


There are 2 answers below.

BEST ANSWER

There are 2 typos in the solution from your question.

$\hat\theta_{mom} \ne \frac{X_1}{X_1 + 1}$, because (with $n=1$) $\hat\theta_{mom} = \frac{X_1}{1 - X_1}$.

And if we do work with $\theta^* = \frac{X_1}{X_1 + 1}$, then instead of

$$E[\theta^*] = E\left[\frac{X_1}{X_1 + 1}\right] = 1 - E\left[\frac{1}{X_1 + 1}\right] = 1 - \int_0^1 \frac{1}{1 - x}\,dx$$

we should have

$$1 - \int_0^1 \frac{1}{1 + x}\,dx = 1 - \ln 2,$$

which is still not equal to $\theta = 1$, so $\theta^*$ is biased as well.
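A quick Monte Carlo sketch confirms the corrected value. This is an illustrative check, not part of the original solution; with $\theta = 1$ the pdf is $f(x) = 1$ on $(0,1)$, so $X_1$ is simply Uniform(0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

# With theta = 1, X1 is Uniform(0, 1).
x = rng.uniform(0.0, 1.0, size=1_000_000)

# Monte Carlo estimate of E[X1 / (X1 + 1)] for theta*.
mc = np.mean(x / (x + 1.0))

# Closed form: 1 - integral_0^1 dx/(1+x) = 1 - ln 2 ≈ 0.3069.
exact = 1.0 - np.log(2.0)

print(mc, exact)
```

Both values agree to about three decimal places, and neither is close to $\theta = 1$.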


With a different approach it is easy to show that the result holds in general, for all $n$ and $\theta$.

Observe that you are not asked to calculate $E(\hat{\theta}_{MM})$ explicitly, but only to show that it is not $\theta$.

Your estimator is

$$\hat{\theta}_{MM}= \frac{\overline{X}_n}{1- \overline{X}_n }$$

Thus, using Jensen's inequality (the map $g(z) = \frac{z}{1-z}$ is strictly convex on $(0,1)$, so the expectation does not pass through it) you get

$$E(\hat{\theta}_{MM})\ne \frac{E(\overline{X}_n)}{1-E(\overline{X}_n) }=\frac{E(X)}{1-E(X)}=\frac{\frac{\theta}{\theta+1}}{1- \frac{\theta}{\theta+1}} =\theta$$

and you are all set!
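Since $g$ is strictly convex, Jensen actually predicts an upward bias, $E(\hat{\theta}_{MM}) > \theta$. A small simulation illustrates this; the choices $\theta = 1$, $n = 5$ here are illustrative, not from the answer.

```python
import numpy as np

rng = np.random.default_rng(42)

theta, n, reps = 1.0, 5, 200_000  # illustrative choices

# With theta = 1 the X_i are Uniform(0, 1); draw `reps` samples of size n.
x = rng.uniform(0.0, 1.0, size=(reps, n))
xbar = x.mean(axis=1)

# Method-of-moments estimator on each replication.
theta_hat = xbar / (1.0 - xbar)

# Strict convexity of g(z) = z/(1-z) on (0,1) gives E[theta_hat] > theta.
print(theta_hat.mean())  # noticeably above theta = 1
```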


On the other hand, since the sample mean is consistent for $E(X)=\frac{\theta}{1+\theta}$ and $g(z)=\frac{z}{1-z}$ is continuous at $z = E(X)$, by the continuous mapping theorem $\hat{\theta}_{MM}$ is consistent for $\theta$.
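The consistency claim can also be checked numerically. Below, $\theta = 2$ and $n$ are illustrative choices; the draws use inverse-CDF sampling, since $F(x) = x^\theta$ on $(0,1)$ implies $X = U^{1/\theta}$ for $U$ uniform.

```python
import numpy as np

rng = np.random.default_rng(1)

theta = 2.0        # illustrative true value
n = 1_000_000

# Inverse-CDF sampling: F(x) = x^theta on (0, 1), so X = U**(1/theta).
x = rng.uniform(0.0, 1.0, size=n) ** (1.0 / theta)

xbar = x.mean()                   # close to E[X] = theta/(theta+1) = 2/3
theta_hat = xbar / (1.0 - xbar)   # close to theta = 2

print(xbar, theta_hat)
```

For a single large sample the estimator lands very close to the true $\theta$, even though it is biased at every fixed $n$.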