Proving completeness of the average of a random normal sample


Suppose that $n$ is a fixed positive integer and $\theta$ is a parameter belonging to $\Theta=\mathbb{R}$. Suppose that we are given that $Y_1,\ldots,Y_n$ are i.i.d. $N(\theta,1)$. I'm trying to show that $T(Y)=\frac{1}{n}\sum_iY_i$ is complete: $$ h\text{ being a function s.t. }E_\theta(h(T))=0\ \forall\,\theta\in\Theta\implies h\equiv 0. $$ Because $T\sim N(\theta,\frac{1}{n})$, I have written out $$ E_\theta(h(T))=\int_{-\infty}^{\infty}\sqrt{\frac{n}{2\pi}}\exp\left(-\frac{n}{2}(t-\theta)^2\right)h(t) \, dt. $$ The book I'm using hints that I should look at the Laplace transform but I'm not sure how to proceed. Thank you for your help.

Edit: the Wikipedia article on completeness covers this example and also alludes to a solution using the Laplace transform. If someone can fill in the steps so I can learn how these things are done, I'll really appreciate it.
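As a quick numerical sanity check of the setup (not part of any proof), one can simulate the statistic and confirm that the sample mean of $n$ i.i.d. $N(\theta,1)$ draws behaves like $N(\theta,\frac1n)$. The choices of $n$, $\theta$, the seed, and the replication count below are arbitrary, made only for this sketch:

```python
import random
import statistics

# Simulate T = (1/n) * sum of n i.i.d. N(theta, 1) draws many times and
# check that T has mean ~ theta and variance ~ 1/n, matching T ~ N(theta, 1/n).
random.seed(0)
n, theta, reps = 4, 1.5, 200_000

means = []
for _ in range(reps):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    means.append(sum(sample) / n)

print(statistics.mean(means))      # close to theta = 1.5
print(statistics.variance(means))  # close to 1/n = 0.25
```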



BEST ANSWER

You want a function $h$ such that for all $\theta\in\mathbb R$, the following integral is zero: $$\int_{-\infty}^\infty \exp\left(-\frac{n}{2}(t-\theta)^2\right)h(t) \, dt.$$ This is $$ \int_{-\infty}^\infty \exp\left(-\frac n 2 \theta^2\right) \exp(nt\theta)\exp\left(-\frac n 2 t^2\right)h(t) \, dt. $$ The first factor does not depend on $t$, so it can be pulled out of the integral; and since it is never $0$, we can divide both sides of the equation that sets this integral to $0$ by it. Then we have $$ \int_{-\infty}^\infty \exp(nt\theta) \underbrace{\exp\left(-\frac n 2 t^2\right)h(t)} \, dt = 0\text{ for all values of }\theta. $$ The expression over the $\underbrace{\text{underbrace}}$ does not depend on $\theta$; it is just a function of $t$. Call that expression $g(t)$.
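The algebra behind this step is just completing the square in the exponent: $-\frac n2(t-\theta)^2 = -\frac n2\theta^2 + nt\theta - \frac n2 t^2$. A numerical spot-check of the factorization, at an arbitrary grid of test points chosen for this sketch:

```python
from math import exp, isclose

# Verify exp(-(n/2)(t - theta)^2)
#      == exp(-(n/2)theta^2) * exp(n*t*theta) * exp(-(n/2)t^2)
# at a handful of (t, theta) pairs.
n = 3
for t in [-2.0, -0.5, 0.0, 1.25, 3.0]:
    for theta in [-1.5, 0.0, 0.75, 2.0]:
        lhs = exp(-(n / 2) * (t - theta) ** 2)
        rhs = exp(-(n / 2) * theta ** 2) * exp(n * t * theta) * exp(-(n / 2) * t ** 2)
        assert isclose(lhs, rhs, rel_tol=1e-12)

print("factorization holds at all test points")
```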

Now we have $$ (\mathcal L g)(-n\theta) = \int_{-\infty}^\infty e^{n\theta t} g(t)\,dt = 0\text{ for all values of }\theta, $$ where for any function $f$, $$ (\mathcal L f)(\eta) = \int_{-\infty}^\infty e^{-\eta t} f(t)\,dt $$ is the two-sided Laplace transform of $f$. Since $\theta$ ranges over all of $\mathbb R$, so does the argument $-n\theta$, and hence $\mathcal L g$ vanishes on all of $\mathbb R$.

So we're saying the two-sided Laplace transform of $g$ is everywhere $0$. By the uniqueness (injectivity) theorem for the two-sided Laplace transform, this forces $g=0$ almost everywhere — which is exactly what completeness requires. Since $g$ is $h$ multiplied by $\exp\left(-\frac n2 t^2\right)$, which is nowhere zero, we must have $h=0$ almost everywhere as well.
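To see the conclusion from the other direction, one can check numerically that a particular nonzero $h$ fails the condition: its expectation may vanish at some $\theta$, but not at all of them. The choice $h(t)=t^3$ and the values of $n$ and the quadrature grid below are arbitrary choices for this sketch; for $T\sim N(\theta,\frac1n)$ the closed form is $E_\theta[T^3]=\theta^3+\frac{3\theta}{n}$.

```python
from math import exp, pi, sqrt

# Approximate E_theta[h(T)] for T ~ N(theta, 1/n) by a midpoint Riemann sum.
n = 4

def expectation(theta, h, lo=-15.0, hi=15.0, steps=60_000):
    dt = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        t = lo + (i + 0.5) * dt
        density = sqrt(n / (2 * pi)) * exp(-(n / 2) * (t - theta) ** 2)
        total += h(t) * density * dt
    return total

h = lambda t: t ** 3
print(expectation(0.0, h))  # ~ 0: vanishes at theta = 0 ...
print(expectation(1.0, h))  # ~ 1.75 = 1 + 3/4: ... but not at theta = 1
```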

Once you've done all this and also proved sufficiency, the Lehmann–Scheffé theorem implies that the sample mean is the uniformly minimum-variance unbiased estimator (UMVUE) of $\theta$.
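A rough numerical illustration of that last claim: among unbiased estimators of $\theta$ for normal data, the sample mean beats competitors such as the sample median (also unbiased here, by symmetry), whose variance is near $\frac{\pi}{2n}$ rather than $\frac1n$. The seed, $n$, and replication count are arbitrary choices for this sketch:

```python
import random
import statistics

# Compare the simulated variances of the sample mean and the sample median
# as estimators of theta for i.i.d. N(theta, 1) data.
random.seed(1)
n, theta, reps = 25, 0.0, 100_000

means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

print(statistics.variance(means))    # close to 1/n = 0.04
print(statistics.variance(medians))  # larger, near pi/(2n) ~ 0.063
```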


...or, note where the normalizing constant in your integral comes from: the square of $\int_0^\infty e^{-r^2}\,dr$ can be evaluated easily in polar coordinates. The result is $\int_0^\infty e^{-r^2}\,dr = \frac{1}{2}\sqrt{\pi}\,\operatorname{erf}(\infty)$, and $\operatorname{erf}(\infty)$ is a fancy way of saying "$1$".
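That Gaussian-integral fact is easy to confirm numerically. Truncating the integral at an upper limit of $10$ is an arbitrary choice for this sketch; the tail beyond it is negligible:

```python
from math import erf, pi, sqrt, exp

# Check integral_0^inf exp(-r^2) dr = (1/2) * sqrt(pi) * erf(inf) = sqrt(pi)/2
# via a midpoint Riemann sum over [0, 10].
lo, hi, steps = 0.0, 10.0, 100_000
dr = (hi - lo) / steps
integral = sum(exp(-(lo + (i + 0.5) * dr) ** 2) for i in range(steps)) * dr

print(integral)   # ~ 0.8862269 = sqrt(pi)/2
print(erf(10.0))  # ~ 1.0: erf at a large argument is effectively erf(infinity)
```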