I have some questions about this problem as I'm reviewing for a qual. Our TA provided us with a solution, but I don't understand what is going on:
So it looks like they are trying to find an estimator $\delta(X)$ so that $E[\delta(X)] = g(\theta)$, where $g(\theta)$ is what we're trying to estimate. I see they start by writing that out, but I'm not sure why they introduce $\gamma$ (as shorthand for $n\lambda$) instead of just working with $\lambda$. Then I'm wondering how they got the equality:
$$\sum_{t=0}^\infty \frac{\gamma^t}{t!}\delta(t) = \sum_{t=0}^\infty \frac{\gamma^t}{t!}\gamma^k?$$
It's like they just got rid of the $e^{-\gamma}$. So my two questions are:
1) Why are they using $\gamma = n\lambda$ instead of just $\lambda$?
2) Where did the equality above come from?
The rest of the problem makes sense assuming the equality is true!

(1) The statistic $T$ has Poisson($n\lambda$) distribution so the expectation of $\delta(T)$ must involve $n\lambda$: $$ E[\delta(T)] = \sum_{t=0}^\infty \delta(t)P(T=t)=\sum_{t=0}^\infty\delta(t)e^{-n\lambda}{(n\lambda)^t\over t!} $$ They then abbreviate $n\lambda$ as $\gamma$ for the sake of saving ink.
(2) The equality comes from the line above it, which states the requirement that $\delta(T)$ be unbiased for $\gamma^k$: $$ e^{-\gamma}\sum_{t=0}^\infty \frac{\gamma^t}{t!}\delta(t) = \gamma^k. $$ Multiply both sides by $e^\gamma$ and write $e^\gamma$ as a power series: $$ \sum_{t=0}^\infty \frac{\gamma^t}{t!}\delta(t) = \gamma^k e^\gamma = \gamma^k\sum_{t=0}^\infty\frac{\gamma^t}{t!} = \sum_{t=0}^\infty\frac{\gamma^{t+k}}{t!} = \sum_{t=k}^\infty\frac{\gamma^t}{(t-k)!}. $$ We deduce what $\delta(t)$ must be by comparing the coefficients of $\gamma^t$ in the two power series: $$ \delta(t) = \frac{t!}{(t-k)!} = t(t-1)\cdots(t-k+1) \text{ for } t\ge k, \qquad \delta(t) = 0 \text{ for } t < k. $$ Note that after finding the right form for $\delta(t)$, we divide it by $n^k$ to obtain an unbiased estimator for $\lambda^k$, since $\gamma^k = (n\lambda)^k = n^k\lambda^k$.
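If you want to convince yourself numerically, here is a minimal simulation sketch (not part of the TA's solution; the function names `falling_factorial`, `poisson_sample`, and `estimate` are my own) checking that $T(T-1)\cdots(T-k+1)/n^k$ averages out to $\lambda^k$ when $T\sim\text{Poisson}(n\lambda)$:

```python
import math
import random

def falling_factorial(t, k):
    """t * (t-1) * ... * (t-k+1); equals 0 whenever t < k."""
    result = 1
    for i in range(k):
        result *= (t - i)
    return result

def poisson_sample(mu, rng):
    """One Poisson(mu) draw via Knuth's multiplication method (fine for small mu)."""
    limit = math.exp(-mu)
    p, t = 1.0, 0
    while True:
        p *= rng.random()
        if p <= limit:
            return t
        t += 1

def estimate(n, lam, k, reps, seed=0):
    """Monte Carlo average of delta(T)/n^k, where T ~ Poisson(n*lam)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        t = poisson_sample(n * lam, rng)   # T is the sum of n Poisson(lam) draws
        total += falling_factorial(t, k) / n**k
    return total / reps

# With n = 5, lam = 1.2, k = 2, the average should be close to lam**2 = 1.44.
print(estimate(n=5, lam=1.2, k=2, reps=200_000))
```

The average settles near $\lambda^k$ as the number of replications grows, while the naive plug-in estimator $(T/n)^k$ would show a visible bias for small $n$.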