Comparing Risk of Estimators

I would like to compare the risk of two estimators. Let $X_1,\dots,X_n$ be i.i.d. $\mathcal{N}(\theta,1)$. I have to show that the sample mean $\bar{X} = \frac{1}{n}\sum X_i$ is not admissible if we have the prior knowledge that $\theta \in [-1,1]$. Therefore I need to show that there exists an estimator with smaller risk, where we use the quadratic risk $R(\theta, T) = E[(\theta-T(X_1, \dots, X_n))^2]$.

I tried it with $n=1$ and with the estimator $T(X) = \begin{cases} X & \text{if } |X| < 1 \\ 0 & \text{otherwise} \end{cases}$

and arrive at: $R(\theta, \bar{X}) = R(\theta, T) -2\theta E[X\mathbb{1}_{\{|X| > 1\}}] + E[X^2\mathbb{1}_{\{ |X| > 1\}}]$, where the cross term comes from expanding $(\theta-X)^2 - \theta^2$ on the event $\{|X| > 1\}$. But I am not able to show that the sum of the last two terms is nonnegative, i.e. that $2\theta E[X\mathbb{1}_{\{|X| > 1\}}] \le E[X^2\mathbb{1}_{\{ |X| > 1\}}]$ for all $\theta \in [-1,1]$.
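As a numerical sanity check (not a proof), the sign of this difference can be estimated by Monte Carlo for a few values of $\theta$. The quantity computed below is $E[X^2\mathbb{1}_{\{|X|>1\}}] - 2\theta E[X\mathbb{1}_{\{|X|>1\}}]$ for $X \sim \mathcal{N}(\theta, 1)$, i.e. the expectation of $(\theta - X)^2 - \theta^2$ over the event $\{|X|>1\}$; the function name and sample sizes are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def extra_terms(theta, sims=500_000):
    """Monte Carlo estimate of E[X^2 1{|X|>1}] - 2*theta*E[X 1{|X|>1}]
    for X ~ N(theta, 1), i.e. the n = 1 case of the risk difference."""
    x = rng.normal(theta, 1.0, size=sims)
    tail = np.abs(x) > 1.0  # indicator of the event {|X| > 1}
    return np.mean(x**2 * tail) - 2.0 * theta * np.mean(x * tail)

for theta in (0.0, 0.5, 1.0):
    print(f"theta = {theta}: extra terms = {extra_terms(theta):.4f}")
```

A positive value at a given $\theta$ only indicates that the truncation estimator beats $\bar X$ at that particular $\theta$; the admissibility argument needs the inequality uniformly over $[-1,1]$.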

Any help is appreciated.

Your choice of estimator amounts to: report the sample mean if it lies between $-1$ and $1$; otherwise, report $0$. But why $0$? If, for instance, $\theta = 1$ and we happen to observe many "large" values of $X_i$ in the sample, your estimator will report $\hat \theta = 0$, even though $\hat \theta = 1$ is an intuitively better estimate. A similar situation holds when $\theta = -1$: again you would declare the parameter to be zero. That is why you are having difficulty demonstrating that your estimator has smaller quadratic risk: it might not, when compared to the raw sample mean.

In light of this, what estimator do you think would perform better than the sample mean in this case? What does it look like?