How to prove an estimator is minimax


Let $X_1,\dots,X_n$ be an iid sample from $U(\theta-1/2,\theta+1/2)$, and consider the absolute loss $L(\theta,a)=|\theta-a|$, where $\theta\in(-\infty,+\infty)$ and $a\in(-\infty,+\infty)$.

Show that $\delta(x_1,...,x_n)=\frac{1}{2}(\min x_i+\max x_i)$ is the minimax estimator of $\theta$.

This is the third part of a problem; I have already solved the first two parts. What I have done so far: I chose the sequence of priors $\Pi_\alpha(\theta) = U(-\alpha,\alpha)$, found the corresponding Bayes solutions $\delta_{\Pi_\alpha}=\frac{1}{2}(\max\{x_{(n)}-1/2,-\alpha\}+\min\{x_{(1)}+1/2,\alpha\})$, and showed that $\delta_{\Pi_\alpha}\rightarrow\frac{1}{2}(\min x_i+\max x_i)$ as $\alpha\to\infty$. But I have no idea how to calculate the Bayes risk of $\delta_{\Pi_\alpha}$, which I would need in order to apply Theorem 2 of https://en.wikipedia.org/wiki/Minimax_estimator.
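For what it's worth, here is a quick Monte Carlo sanity check of that Bayes risk. It is only a sketch, assuming NumPy; the values of $n$, $\alpha$, and the replication count are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, reps = 5, 10.0, 200_000   # arbitrary illustration values

# Draw theta from the prior U(-alpha, alpha), then a sample from U(theta-1/2, theta+1/2).
theta = rng.uniform(-alpha, alpha, size=reps)
x = rng.uniform(theta[:, None] - 0.5, theta[:, None] + 0.5, size=(reps, n))
lo, hi = x.min(axis=1), x.max(axis=1)

# The Bayes solution above: midpoint of the posterior support clipped to [-alpha, alpha].
delta = 0.5 * (np.maximum(hi - 0.5, -alpha) + np.minimum(lo + 0.5, alpha))

# Bayes risk r(Pi_alpha, delta_{Pi_alpha}) = E[|theta - delta|] over prior and data.
print("estimated Bayes risk:", np.abs(theta - delta).mean())
```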

Any hint would be appreciated.


There is 1 best solution below.


We can use the corollary to Theorem 1 on that page (a Bayes estimator with constant risk is minimax).

We first have to show that $\frac{1}{2}[\min\{X_i\}_{i=1}^n+\max\{X_i\}_{i=1}^n]$ is a Bayes estimator.

We can't do this with a proper prior, but with the improper prior $\pi(\theta)\propto 1$ we get a proper posterior distribution:

\begin{equation} \begin{split} p(\theta \mid \{X_i\}_{i=1}^n) &\propto \prod_{i=1}^n I_{\left(x_i \in [\theta-\frac{1}{2},\,\theta+\frac{1}{2}]\right)} \\ &= I_{\left(\theta \in [x_{(n)}-\frac{1}{2},\; x_{(1)}+\frac{1}{2}]\right)}, \end{split} \end{equation}

where $x_{(1)} = \min\{X_i\}_{i=1}^n$ and $x_{(n)} = \max\{X_i\}_{i=1}^n$. All $n$ indicators equal $1$ exactly when $x_{(n)}-\frac{1}{2} \le \theta \le x_{(1)}+\frac{1}{2}$, an interval of length $1-(x_{(n)}-x_{(1)})$, so dividing by that length gives a density that integrates to $1$. Thus $\theta \mid \{X_i\}_{i=1}^n \sim \textrm{Unif}\left(x_{(n)}-\frac{1}{2},\; x_{(1)}+\frac{1}{2}\right)$.
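As a quick numerical check of this support (a sketch assuming NumPy, with a made-up sample):

```python
import numpy as np

x = np.array([0.2, 0.35, 0.6])        # hypothetical sample, n = 3
theta_grid = np.linspace(-1.0, 2.0, 3001)

# Flat-prior posterior is proportional to the product of the n uniform indicators.
post = np.all((x >= theta_grid[:, None] - 0.5) & (x <= theta_grid[:, None] + 0.5), axis=1)

support = theta_grid[post]
print(support.min(), support.max())   # ~0.1 and ~0.7, i.e. [max(x)-1/2, min(x)+1/2]
```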

Note that $\min_a E_{\Pi(\theta)}[|\theta-a|]$ is attained at the median of $\Pi(\theta)$: under absolute loss, the Bayes action is the posterior median. The median of $\textrm{Unif}\left(x_{(n)}-\frac{1}{2},\; x_{(1)}+\frac{1}{2}\right)$ is its midpoint,

\begin{equation} \frac{1}{2}\left[\left(x_{(n)}-\tfrac{1}{2}\right)+\left(x_{(1)}+\tfrac{1}{2}\right)\right] = \frac{1}{2}\left[\min\{X_i\}_{i=1}^n+\max\{X_i\}_{i=1}^n\right]. \end{equation}

So the specified estimator is a (generalized) Bayes estimator.

By the corollary, if a Bayes estimator has constant risk it is minimax. Examining the risk, we get:

\begin{equation} R(\theta,\delta) = E_{\theta}[|\theta-\delta|] = P[\delta<\theta]\,E[\theta-\delta \mid \delta<\theta] + P[\delta>\theta]\,E[\delta-\theta \mid \delta>\theta] \end{equation}

Representing \begin{equation} \begin{split} \delta & = \frac{1}{2}[\min\{X_i\}_{i=1}^n + \max\{X_i\}_{i=1}^n]\\ & = \theta - \frac{1}{2} + \frac{1}{2}[\min\{U_i\}_{i=1}^n + \max\{U_i\}_{i=1}^n] \end{split} \end{equation}

where $U_i = X_i - (\theta - \frac{1}{2})$ are iid $\textrm{Unif}(0,1)$ random variables. The order statistics satisfy $\min\{U_i\}_{i=1}^n \sim \textrm{Beta}(1,n)$ and $\max\{U_i\}_{i=1}^n \sim \textrm{Beta}(n,1)$.
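These order-statistic facts are standard; a minimal empirical check (assuming NumPy, comparing empirical CDFs at one arbitrary point against the Beta CDFs $1-(1-t)^n$ and $t^n$):

```python
import numpy as np

n, reps = 5, 500_000
u = np.random.default_rng(1).random((reps, n))   # iid Unif(0,1)

t = 0.3                                          # arbitrary evaluation point
# P[min <= t] = 1 - (1-t)^n  (Beta(1, n) CDF);  P[max <= t] = t^n  (Beta(n, 1) CDF)
print((u.min(axis=1) <= t).mean(), 1 - (1 - t) ** n)
print((u.max(axis=1) <= t).mean(), t ** n)
```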

The reflection $U_i \mapsto 1-U_i$ leaves the joint distribution of the sample unchanged and swaps the minimum with the maximum, so $\left(\min\{U_i\}_{i=1}^n,\, \max\{U_i\}_{i=1}^n\right) \stackrel{d}{=} \left(1-\max\{U_i\}_{i=1}^n,\, 1-\min\{U_i\}_{i=1}^n\right)$. Hence $\min\{U_i\}_{i=1}^n+\max\{U_i\}_{i=1}^n$ is symmetric about $1$, which means

\begin{equation} \delta = \theta - \frac{1}{2} + \frac{1}{2}\left[\min\{U_i\}_{i=1}^n + \max\{U_i\}_{i=1}^n\right] \end{equation}

is symmetric about $\theta$; thus $P[\delta<\theta] = P[\delta>\theta] = c$.
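A one-line simulation of this symmetry claim (again a sketch assuming NumPy, with arbitrary $n$):

```python
import numpy as np

u = np.random.default_rng(2).random((500_000, 5))
s = 0.5 * (u.min(axis=1) + u.max(axis=1))        # equals delta - theta + 1/2
print((s < 0.5).mean(), (s > 0.5).mean())        # both approximately the same c
```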

Substituting into the risk above and writing $W = \delta - \theta = \frac{1}{2}\left[\min\{U_i\}_{i=1}^n + \max\{U_i\}_{i=1}^n\right] - \frac{1}{2}$:

\begin{equation} R(\theta,\delta) = c\left(E[\theta-\delta \mid \delta<\theta] + E[\delta-\theta \mid \delta>\theta]\right) = c\left(E[-W \mid W<0] + E[W \mid W>0]\right), \end{equation}

which is constant with respect to $\theta$, since the distribution of $W$ does not depend on $\theta$. By the corollary, $\delta$ is minimax.
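Finally, a Monte Carlo check that the risk really is flat in $\theta$ (a sketch assuming NumPy; the $\theta$ values and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 5, 400_000

for theta in (-3.0, 0.0, 7.5):                   # arbitrary theta values
    x = rng.uniform(theta - 0.5, theta + 0.5, size=(reps, n))
    delta = 0.5 * (x.min(axis=1) + x.max(axis=1))
    print(theta, np.abs(delta - theta).mean())   # same risk estimate for each theta
```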