I'm working on a problem in Statistics, as follows:
Let $X_1, ..., X_n$ be a random sample from a Poisson distribution with parameter $\theta$. Denote $T_n = \sum_{i=1}^n X_i$.
a) Show that the sample mean $\overline{X} = T_n/n$ is an efficient estimator.
b) Suppose that $g(\theta) = P(X=0) = e^{-\theta}$. For the minimum variance unbiased estimator $\hat{g}(\theta) = (1-\frac{1}{n})^{T_n}$, prove that the Cramer-Rao lower bound is not achievable.
I've got part (a) just fine by showing that $Var(\overline{X}) = \frac{1}{nI(\theta)}$, where $I(\theta)$ denotes the Fisher information -- that is, the variance of $\overline{X}$ attains the Cramer-Rao lower bound.
I'm struggling with (b). I tried to approach it the same way as (a), by showing that $\operatorname{Var}(\hat{g}(\theta)) \neq \frac{1}{nI(\hat{g}(\theta))}$. However, when I try to compute the Fisher information $I(\hat{g}(\theta)) = -E\left(\frac{d^2}{d \theta^2} \log \hat{g}(\theta)\right)$, I run into a problem -- the first derivative of $\log \hat{g}(\theta)$ with respect to $\theta$ ends up being zero, since there is no $\theta$ in the formula for $\hat{g}(\theta)$.
How can I refine my logic for part (b)?
Thanks!
For part (a), I would rather just show that $\overline X$ satisfies the condition of equality in the Cramer-Rao inequality.
For $x=(x_1,\ldots,x_n)$ with $x_i\in\{0,1,\ldots\}$ for all $i$, the pmf of $(X_1,\ldots,X_n)$ is
$$p_{\theta}(x)=\frac{e^{-n\theta}\theta^{n\bar x}}{\prod_{i=1}^n (x_i!)}$$
Therefore $$\ln p_{\theta}(x)=-n\theta+n\bar x\ln\theta-\sum_{i=1}^n \ln(x_i!)\,,$$ and differentiating with respect to $\theta$ gives
$$\frac{\partial}{\partial\theta}\ln p_{\theta}(x)=\frac{n\bar x}{\theta}-n=\frac{n}{\theta}(\bar x-\theta)\tag{*}$$
This is precisely the equality condition, i.e. $\frac{\partial}{\partial\theta}\ln p_{\theta}(x)$ is proportional to $T(x)-\theta$ with $T(x)=\bar x$.
Since $\overline X$ is unbiased for $\theta$, equation $(*)$ implies that $\overline X$ is the minimum variance unbiased estimator of $\theta$ with variance of $\overline X$ attaining the Cramer-Rao lower bound for $\theta$.
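(Not needed for the proof, but a quick simulation makes this concrete. Here is a minimal NumPy sketch; the values of $\theta$, $n$ and the number of replications are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 50, 200_000   # arbitrary illustrative values

# reps independent samples of size n from Poisson(theta), one per row
samples = rng.poisson(theta, size=(reps, n))
xbar = samples.mean(axis=1)

print("empirical Var(X-bar):", xbar.var())          # should be close to theta/n
print("CRLB 1/(n I(theta)) = theta/n:", theta / n)
```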
For (b), first find the CR lower bound for $g(\theta)=e^{-\theta}$. It is given by $$\text{CRLB}(g(\theta))=\frac{(g'(\theta))^2}{I(\theta)}\quad,$$
where $I(\theta)=\frac{n}{\theta}$ is the Fisher information of the whole sample (each Poisson observation contributes $1/\theta$, so a sample of size $n$ carries $n/\theta$).
That is, $$\text{CRLB}(g(\theta))=\frac{\theta e^{-2\theta}}{n}$$
Now with $a=1-\frac1n$ for $n>1$,
\begin{align} \operatorname{Var}_{\theta}(a^{T_n})&=\operatorname{E}_{\theta}\left[(a^{T_n})^2\right]-\left(\operatorname{E}_{\theta}[a^{T_n}]\right)^2 \\&=\operatorname{E}_{\theta}\left[(a^2)^{T_n}\right]-(g(\theta))^2\tag{**} \end{align}
Since $T_n=\sum\limits_{i=1}^n X_i\sim \mathsf{Poisson}(n\theta)$, it is true that $\operatorname{E}(c^{T_n})=e^{n\theta(c-1)}$ for any constant $c$.
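(As a sanity check of this identity -- again not part of the argument -- one can compare the empirical mean of $c^{T_n}$ with the closed form; the sketch below uses the constant $c=a^2$ that appears in $(**)$, with arbitrary $\theta$ and $n$.)

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 500_000   # arbitrary illustrative values
c = (1 - 1/n)**2                    # the constant a^2 appearing in (**)

Tn = rng.poisson(n * theta, size=reps)   # T_n ~ Poisson(n*theta)
print("empirical E[c^T_n]   :", np.mean(c**Tn))
print("pgf  e^{n theta(c-1)}:", np.exp(n * theta * (c - 1)))
```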
Since $n(a^2-1)=n\left(\frac{1}{n^2}-\frac{2}{n}\right)=\frac{1}{n}-2$, equation $(**)$ reduces to $$\operatorname{Var}_{\theta}(a^{T_n})=\exp\left[n\theta(a^2-1)\right]-e^{-2\theta}=e^{-2\theta}e^{\theta/n}-e^{-2\theta}=e^{-2\theta}(e^{\theta/n}-1)$$
Finally take the ratio $\operatorname{Var}_{\theta}(a^{T_n})/\text{CRLB}(g(\theta))$:
$$\frac{\operatorname{Var}_{\theta}(a^{T_n})}{\text{CRLB}(g(\theta))}=\frac{n(e^{\theta/n}-1)}{\theta}=\frac{n}{\theta}\left(\frac{\theta}{n}+\frac{\theta^2}{2n^2}+\cdots\right)=1+\frac{\theta}{2n}+\cdots>1$$
Since this ratio is strictly greater than $1$ for every $\theta>0$, the variance of $a^{T_n}$ always exceeds the Cramer-Rao lower bound; and since $a^{T_n}$ is the UMVUE, no unbiased estimator of $g(\theta)$ can attain the bound.
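(Finally, a short simulation sketch that checks all three quantities at once -- the variance of the UMVUE, the CRLB, and their ratio; parameter values are again arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 1_000_000   # arbitrary illustrative values

a = 1 - 1/n
Tn = rng.poisson(n * theta, size=reps)
est = a**Tn                            # the UMVUE (1 - 1/n)^{T_n}

var_exact = np.exp(-2*theta) * (np.exp(theta/n) - 1)
crlb = theta * np.exp(-2*theta) / n

print("empirical Var(a^T_n):", est.var())
print("exact     Var(a^T_n):", var_exact)
print("CRLB                :", crlb)
print("ratio Var/CRLB      :", var_exact / crlb)   # strictly > 1
print("1 + theta/(2n)      :", 1 + theta/(2*n))    # leading-order approximation
```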