Show that $\hat{\theta}$ is not minimax.


Let $\hat{\theta}$ be an unbiased estimator of an unknown parameter $\theta\in\mathbb{R}$. Assuming $\theta\neq 0$, we consider the loss function $$L(\theta,a) = \dfrac{(a - \theta)^2}{\theta^2}.$$

Exercise: Assume that $0\leq R(\theta,T) <\infty$ for any estimator $T$. Show that $\hat{\theta}$ is not minimax.

Given solution: We consider an estimator $T = c\hat{\theta}$. In this case the risk function is given by $$R(\theta, c\hat\theta) = \operatorname{E}_\theta\bigg[\dfrac{(c\hat\theta - \theta)^2}{\theta^2}\bigg] \\= (1-c)^2 + c^2R(\theta,\hat\theta)$$

where the second term vanishes because $R(\theta, \hat\theta) = 0$, as $\hat\theta$ is unbiased. We look for a constant $c$ such that $$\sup_{\theta \in \mathbb{R}}R(\theta, c\hat\theta) < \sup_{\theta\in\mathbb{R}}R(\theta, \hat\theta),$$

which is equivalent to \begin{equation}\sup_{\theta\in\mathbb{R}}R(\theta, \hat\theta) > \dfrac{(1-c)^2}{1-c^2} = \dfrac{1-c}{1+c} =:g(c).\tag{1}\end{equation} Since $g:[0,1]\to[0,1]$ is continuous and decreasing, $(1)$ holds for all $c\in(0,1)$ if $\sup_\theta R(\theta,\hat\theta_n) > 1.$ If $\sup_\theta R(\theta,\hat\theta_n) \leq 1$, then there exists a $c^{*}$ such that $(1)$ holds for all $c\geq c^{*}$.
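To see the shrinkage effect numerically, here is a concrete model of my own choosing (not from the book): take $X_1,\dots,X_n \sim N(\theta,\theta^2)$ and $\hat\theta = \bar X$. Then $R(\theta,\hat\theta) = \operatorname{Var}(\bar X)/\theta^2 = 1/n$ for every $\theta$, so $\sup_\theta R(\theta,\hat\theta) = 1/n \leq 1$, and $R(\theta, c\hat\theta) = (1-c)^2 + c^2/n$ is minimized at $c = n/(n+1)$ with value $1/(n+1) < 1/n$. A Monte Carlo sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def risk(c, theta, n, reps=200_000):
    """Monte Carlo estimate of R(theta, c*xbar) under X_i ~ N(theta, theta^2)."""
    x = rng.normal(theta, abs(theta), size=(reps, n))
    xbar = x.mean(axis=1)
    return np.mean((c * xbar - theta) ** 2 / theta ** 2)

n, theta = 5, 2.0
c_star = n / (n + 1)              # shrinkage factor minimizing (1-c)^2 + c^2/n

r_unshrunk = risk(1.0, theta, n)  # theory: 1/n = 0.2
r_shrunk = risk(c_star, theta, n) # theory: 1/(n+1) = 0.1667

print(r_unshrunk, r_shrunk)
```

The shrunken estimator $c^{*}\bar X$ has strictly smaller risk at every $\theta$ (the risk here does not depend on $\theta$), so $\bar X$ cannot be minimax in this model.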

What I don't understand about this solution:

We have that $R(\theta, \hat\theta) = 0$ since $\hat\theta$ is unbiased, and $R(\theta,c\hat\theta) = c^2 - 2c + 1$, as indicated in the solution. Now $c^2 - 2c + 1 \geq 0$, so there is no $c^{*}$ such that $R(\theta, c\hat\theta) < R(\theta, \hat\theta)$. Obviously I'm missing something; for one thing, I don't understand the subscript $n$ used in the last few lines of the solution. It also seems that, since in this solution $R(\theta,\hat\theta)$ can be larger than zero, $R(\theta,\hat\theta)$ must depend on $\theta$.

Question: Is the given solution correct? If so, what's wrong with my reasoning? How can $R(\theta, \hat\theta) > 0$?

Thanks!

Best answer:

First of all: if $R(\theta, \hat{\theta}) = 0$ for all $\theta$, then you have a perfect estimator, and of course it is minimax; the book is wrong in not accounting for this possibility. Under the assumption that $R(\theta, \hat{\theta}) > 0$, the reasoning in the book seems valid, except for the clause 'where the second term vanishes because $R(\theta, \hat{\theta}) = 0$ as $\hat{\theta}$ is unbiased'. Note that the sequel does not use that the second term vanishes; the whole appearance of $1 - c^2$ comes precisely from it *not* vanishing.

Now for your question: how can $R(\theta, \hat{\theta}) > 0$? I'm not sure but it must have something to do with the following.

$\hat{\theta}$ being unbiased means that $\mathbb{E}\hat{\theta} = \theta$. This does not automatically give you $\mathbb{E}\hat{\theta}^2 = \theta^2$; in fact, the difference $\mathbb{E}\hat{\theta}^2 - \theta^2$ can be rewritten as $\mathbb{E}\hat{\theta}^2 - (\mathbb{E}\hat{\theta})^2$, which famously equals $\operatorname{Var}(\hat{\theta})$, showing that it is non-negative. Now if $\mathbb{E}\hat{\theta}^2 = q\theta^2$ for some ratio $q \geq 1$ (depending on $\theta$ in ways we have no control over), the risk function reduces to $R(\theta, \hat{\theta}) = q - 1$, which may but need not be equal to $0$.
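For instance, in a toy model of my own (assumed purely for illustration): with $X_1,\dots,X_n \sim N(\theta,\theta^2)$ and the unbiased $\hat\theta = \bar X$, we get $q = \mathbb{E}\hat\theta^2/\theta^2 = 1 + 1/n > 1$, hence $R(\theta,\hat\theta) = q - 1 = 1/n > 0$ despite unbiasedness. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)

n, theta, reps = 5, 3.0, 500_000
x = rng.normal(theta, abs(theta), size=(reps, n))
xbar = x.mean(axis=1)                             # unbiased: E[xbar] = theta

q = np.mean(xbar ** 2) / theta ** 2               # theory: 1 + 1/n = 1.2
risk = np.mean((xbar - theta) ** 2 / theta ** 2)  # theory: q - 1 = 1/n = 0.2

print(q, risk)
```

So an unbiased estimator with positive variance always has $q > 1$ and therefore strictly positive risk under this loss.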

So this is not a full answer: I can argue why we should leave open the possibility of $R(\theta, \hat{\theta}) \neq 0$ even in the unbiased case, but somehow the book uses that we really have $R(\theta, \hat{\theta}) > 0$. I'm quite stumped as to how to conclude that in the absence of more information.

ADDED: I think I have an idea what form this extra info might take. I can imagine that we have data drawn from some distribution depending on $\theta$ and that we then compute $\hat{\theta}_n$ based on the first $n$ data points. If the distribution of the data points themselves has non-zero variance and the computation of $\hat{\theta}$ actually uses the data in a non-trivial way, then $\hat{\theta}$ will have positive variance (and hence a value of $q$ strictly greater than $1$) as well.