If $X\sim\operatorname{Poisson}(u)$ and $\theta = \mathbb{P}\{X=0\} = e^{-u}$, is $\hat{\theta}_1 = e^{-X}$ an unbiased estimator?

  1. If $X\sim\operatorname{Poisson}(u)$ and $\theta = \mathbb{P}\{X=0\} = e^{-u}$, is $\hat{\theta}_1 = e^{-X}$ an unbiased estimator?

Here's what I tried; is this right?
$$ \begin{align} \mathbb{E}[\hat{\theta}_1] &= \mathbb{E}[e^{-X}] \\ &= e^{\mathbb{E}[-X]}\\ &= e^{-u} \\ &= \theta \end{align} $$

  2. Show that $\hat{\theta}_2 = w(X)$ is an unbiased estimator of $\theta$, where $w(0)=1$ and $w(x)=0$ if $x> 0$.

I honestly don't know how to do this part. Thanks for the help!

  3. Compare the MSEs of $\hat{\theta}_1$ and $\hat{\theta}_2$ for estimating $\theta = e^{-u}$ when $u=1$ and $u=2$.

So the MSE is equal to $\operatorname{Var}(\hat\theta) + \operatorname{Bias}(\hat\theta)^2$.

Since $\hat{\theta}_2$ is unbiased, $\operatorname{MSE}=\operatorname{Var}(\hat{\theta}_2)$. My variance formula is $E(\hat\theta_2^2)-E(\hat\theta_2)^2$.

I think $E(\hat\theta_2)^2$ is $(e^{-u})^2=e^{-2u}$, and for $E(\hat\theta_2^2)$ do I just use $\sum_{k=0}^\infty w(k)^2\,P[X=k]$?



In part (1) there is no theorem that states $\mathbb E(\exp Y) = \exp \mathbb E(Y)$; you can't move the expectation past the exponential. (In fact, by Jensen's inequality $\mathbb E(e^{-X}) \ge e^{-\mathbb E(X)}$, with equality only in degenerate cases.) Instead, use the general formula: $$ \mathbb E(g(X))=\sum_{k=0}^\infty g(k)P(X=k), $$ which is valid for any function $g$ when $X$ takes values $0, 1, 2,\ldots$. For part (1) the formula gives $$ \mathbb E(e^{-X}) = \sum_{k=0}^\infty e^{-k}P(X=k) = \sum_{k=0}^\infty e^{-k}e^{-u}{u^k\over k!} = e^{-u}\sum_{k=0}^\infty {(u e^{-1})^k\over k!} = e^{-u}e^{u e^{-1}} = e^{u(e^{-1}-1)}, $$ agreeing with the answer of @Clement C. For part (2) the formula gives $$ \mathbb E(w(X)) = \sum_{k=0}^\infty w(k)P(X=k) = w(0)P(X=0) + \sum_{k=1}^\infty w(k)P(X=k). $$ Can you take it from there?
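As a sanity check, the $\mathbb E[g(X)]$ series is easy to evaluate numerically. A minimal Python sketch (the helper `poisson_E`, the truncation length, and the example rate `u = 1.5` are my own choices, not part of the question):

```python
import math

def poisson_E(g, u, terms=100):
    """Approximate E[g(X)] for X ~ Poisson(u) by truncating sum_k g(k) P(X=k)."""
    p = math.exp(-u)          # P(X = 0)
    total = 0.0
    for k in range(terms):
        total += g(k) * p
        p *= u / (k + 1)      # recurrence: P(X = k+1) = P(X = k) * u / (k+1)
    return total

u = 1.5  # arbitrary example rate
print(poisson_E(lambda k: math.exp(-k), u))            # E[e^{-X}], part (1)
print(math.exp(u * (math.exp(-1) - 1)))                # closed form e^{u(1/e - 1)}
print(poisson_E(lambda k: 1.0 if k == 0 else 0.0, u))  # E[w(X)], part (2)
print(math.exp(-u))                                    # theta = e^{-u}
```

The two printed pairs should agree to many decimal places, confirming both the part (1) closed form and the part (2) unbiasedness claim.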

For part (3), to get the variance of a generic estimator $\hat\theta$, you are correct to use: $$ V(\hat\theta) = E(\hat\theta^2) - [E(\hat\theta)]^2. $$ You already calculated $E(\hat\theta)$ in parts (1) and (2). As for $E(\hat\theta^2)$: for part (1) we have $\hat\theta_1:=e^{-X}$, so $$ E[\hat\theta_1^2] = E[(e^{-X})^2] = E[e^{-2X}] = E[g(X)] $$ where $g(x):=e^{-2x}$, while for part (2) we have $\hat\theta_2:=w(X)$, so $$E[\hat\theta_2^2] = E[w(X)^2] = E[w(X)]$$ since $w(x)$ takes only the values $0$ and $1$. So in both cases you can apply the $\mathbb E g(X)$ formula.


Note that $$\operatorname{E}[\hat \theta_1] = \operatorname{E}[e^{-X}] = M_X(-1),$$ where $M_X(t) = \operatorname{E}[e^{tX}]$ is the moment generating function of $X$. For $X \sim \operatorname{Poisson}(u)$, we can compute $$M_X(t) = \sum_{x=0}^\infty e^{tx} e^{-u} \frac{u^x}{x!} = \sum_{x=0}^\infty e^{-u} \frac{(ue^t)^x}{x!} = e^{u(e^t-1)} \sum_{x=0}^\infty e^{-(ue^t)}\frac{(ue^t)^x}{x!} = e^{u(e^t-1)},$$ hence $$\operatorname{E}[\hat\theta_1] = e^{u(e^{-1} - 1)} \ne e^{-u},$$ thus $\hat \theta_1$ is biased for the parameter $\theta = e^{-u}$, for a single observation drawn from such a distribution.
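If you want to sanity-check the closed form $M_X(t) = e^{u(e^t-1)}$, you can compare it against a truncated version of the defining series. A quick sketch (the truncation length and example rate are my own choices):

```python
import math

def poisson_mgf_series(t, u, terms=100):
    """Truncated series for M_X(t) = sum_x e^{tx} P(X = x), X ~ Poisson(u)."""
    p = math.exp(-u)          # P(X = 0)
    total = 0.0
    for x in range(terms):
        total += math.exp(t * x) * p
        p *= u / (x + 1)      # P(X = x+1) = P(X = x) * u / (x+1)
    return total

u = 2.0                       # example rate
for t in (-1.0, -2.0):        # the two arguments this answer uses
    print(poisson_mgf_series(t, u), math.exp(u * (math.exp(t) - 1)))
```

Both values of $t$ that appear in this answer ($t=-1$ for the mean, $t=-2$ for the second moment) should match the closed form essentially exactly.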

Now suppose $$\hat \theta_2 = \mathbb 1 (X = 0) = \begin{cases} 1, & X = 0 \\ 0, & X > 0. \end{cases}$$ We evaluate by the law of total expectation $$\operatorname{E}[\hat\theta_2] = \operatorname{E}[\hat \theta_2 \mid X = 0] \Pr[X = 0] + \operatorname{E}[\hat \theta_2 \mid X > 0] \Pr[X > 0] = \Pr[X = 0] = \theta.$$ Thus $\hat \theta_2$ is unbiased for $\theta$.
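To see the unbiasedness empirically, here is a small Monte Carlo sketch; the sampler uses Knuth's multiplication method, and the seed and sample size are arbitrary choices of mine:

```python
import math
import random

def poisson_sample(u, rng):
    """Draw one Poisson(u) variate via Knuth's multiplication method."""
    threshold = math.exp(-u)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

u = 1.0
rng = random.Random(0)
n = 200_000
# Sample mean of w(X) = 1(X = 0); should hover near theta = e^{-u}.
est = sum(poisson_sample(u, rng) == 0 for _ in range(n)) / n
print(est, math.exp(-u))
```

With this many draws the indicator's sample mean should land within a few thousandths of $e^{-1} \approx 0.3679$.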

To find the MSE of $\hat \theta_1$ and $\hat \theta_2$, we need to compute the variance of these estimators. Note $$\operatorname{Var}[\hat\theta_1] = \operatorname{E}[\hat\theta_1^2] - \operatorname{E}[\hat\theta_1]^2,$$ and the first term is $$\operatorname{E}[\hat\theta_1^2] = \operatorname{E}[e^{-2X}] = M_X(-2) = e^{u(e^{-2}-1)}.$$ Combined with the previous computation of the expectation, we find $$\begin{align*} \operatorname{MSE}[\hat\theta_1] &= e^{u(e^{-2} - 1)} - e^{2u(e^{-1} - 1)} + (e^{u(e^{-1} - 1)} - e^{-u})^2 \\ &= e^{-2u}\left(1 - 2e^{ue^{-1}} + e^{u(e^{-2} + 1)}\right). \end{align*}$$ Since $\hat \theta_2$ is unbiased, its MSE is equal to its variance, which we compute as $$\operatorname{MSE}[\hat\theta_2] = \operatorname{Var}[\hat \theta_2] = \operatorname{E}[\hat\theta_2^2] - \operatorname{E}[\hat\theta_2]^2,$$ but note that $$\operatorname{E}[\hat\theta_2^2] = \operatorname{E}[\hat\theta_2] = \theta,$$ because $$\hat\theta_2^2 = \hat\theta_2.$$ Hence $$\operatorname{MSE}[\hat\theta_2] = \theta(1-\theta) = e^{-u} (1 - e^{-u}).$$ I leave it to you to finish the exercise to determine which MSE is smaller for $u = 1$ and $u = 2$.
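To finish numerically, one can plug $u=1$ and $u=2$ into the two MSE expressions derived above. A minimal sketch (the function names are mine):

```python
import math

def mse_theta1(u):
    """MSE of theta1_hat = e^{-X}: variance plus squared bias, per the formulas above."""
    e1, e2 = math.exp(-1), math.exp(-2)
    var = math.exp(u * (e2 - 1)) - math.exp(2 * u * (e1 - 1))
    bias = math.exp(u * (e1 - 1)) - math.exp(-u)
    return var + bias ** 2

def mse_theta2(u):
    """MSE of theta2_hat = 1(X=0): Bernoulli variance theta(1 - theta)."""
    theta = math.exp(-u)
    return theta * (1 - theta)

for u in (1.0, 2.0):
    print(u, mse_theta1(u), mse_theta2(u))
```

Under these formulas the comparison flips between the two cases: at $u=1$ the values come out to roughly $0.166$ for $\hat\theta_1$ versus $0.233$ for $\hat\theta_2$, while at $u=2$ they are roughly $0.119$ versus $0.117$, so $\hat\theta_2$ edges ahead.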