Which estimator is better?


Let $X=\left(X_{1},\dots,X_{n}\right)$ and $Y=\left(Y_{1},\dots,Y_{n}\right)$ be independent samples from the distributions $\mathcal{N}\left(m_{x},\sigma^{2}\right)$ and $\mathcal{N}\left(m_{y},\sigma^{2}\right)$. Which of these two estimators of $m_x m_y$: $$T_{1}\left(X,Y\right) = \overline{X}\,\overline{Y}\\ T_{2}\left(X,Y\right) = \frac{1}{n}\sum_{i=1}^{n}X_{i}Y_{i}$$ is better under quadratic loss? Can I ask for a hint?




Both estimators are unbiased; thus, in classical statistical theory, you can compare them by their variances.

After some calculations I get

$$\mathbb{V}[T_1]=\frac{\sigma^2}{n}\left[\frac{\sigma^2}{n}+m_X^2+m_Y^2\right]$$

$$\mathbb{V}[T_2]=\frac{\sigma^2}{n}[\sigma^2+m_X^2+m_Y^2]$$

thus $T_1$ is preferred, as its variance is strictly lower than that of $T_2$ for any $n>1$ (the two coincide at $n=1$).
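A quick Monte Carlo sketch can confirm both variance formulas empirically. The parameter choices below ($m_x=2$, $m_y=3$, $\sigma=1$, $n=10$) are arbitrary assumptions for illustration, not part of the original problem:

```python
import numpy as np

rng = np.random.default_rng(0)
m_x, m_y, sigma, n, trials = 2.0, 3.0, 1.0, 10, 200_000

# Draw `trials` independent samples of size n for X and Y
X = rng.normal(m_x, sigma, size=(trials, n))
Y = rng.normal(m_y, sigma, size=(trials, n))

T1 = X.mean(axis=1) * Y.mean(axis=1)  # product of sample means
T2 = (X * Y).mean(axis=1)             # mean of pairwise products

# Theoretical variances from the answer above
v1_theory = sigma**2 / n * (sigma**2 / n + m_x**2 + m_y**2)
v2_theory = sigma**2 / n * (sigma**2 + m_x**2 + m_y**2)

print(f"V[T1]: empirical {T1.var():.4f}  vs theory {v1_theory:.4f}")
print(f"V[T2]: empirical {T2.var():.4f}  vs theory {v2_theory:.4f}")
```

The empirical variances should land close to the two formulas, with $T_1$'s variance the smaller of the two.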


In classical statistical theory, when comparing two estimators, the one with the lower MSE is preferred.

By definition,

$$MSE_{T_1}=\mathbb{E}\left[\left(T_1-m_X m_Y\right)^2\right]$$

In your statement you assumed immediately that $\mathbb{E}[T_1]=\mathbb{E}[T_2]=m_X\cdot m_Y$

... which is correct, but it has to be verified first; once unbiasedness is established, the MSE reduces to the variance.


$$\mathbb{V}[T_1]=\mathbb{V}[\overline{X}\,\overline{Y}]=\mathbb{E}\left[(\overline{X})^2\right]\cdot \mathbb{E}\left[(\overline{Y})^2\right]-m_X^2\cdot m_Y^2$$

(using the independence of $X$ and $Y$ to factor $\mathbb{E}[(\overline{X}\,\overline{Y})^2]$);

now observe that $\overline{X}\sim N(m_X ;\sigma^2/n)$ and $\overline{Y}\sim N(m_Y ;\sigma^2/n)$, so $\mathbb{E}[(\overline{X})^2]=\sigma^2/n+m_X^2$ and likewise for $\overline{Y}$,

thus

$$\mathbb{V}[T_1]=\left(\frac{\sigma^2}{n}+m_X^2\right)\left(\frac{\sigma^2}{n}+m_Y^2\right)-m_X^2\cdot m_Y^2$$
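Expanding the product and collecting terms (a step skipped above) recovers the stated variance, and the analogous computation for $T_2$, using the fact that the pairs $X_iY_i$ are i.i.d., gives the other one:

$$\mathbb{V}[T_1]=\frac{\sigma^4}{n^2}+\frac{\sigma^2}{n}m_Y^2+\frac{\sigma^2}{n}m_X^2=\frac{\sigma^2}{n}\left[\frac{\sigma^2}{n}+m_X^2+m_Y^2\right]$$

$$\mathbb{V}[T_2]=\frac{1}{n}\mathbb{V}[X_1Y_1]=\frac{1}{n}\left[(\sigma^2+m_X^2)(\sigma^2+m_Y^2)-m_X^2 m_Y^2\right]=\frac{\sigma^2}{n}\left[\sigma^2+m_X^2+m_Y^2\right]$$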