Consider two independent normal populations $X\sim N(\mu_1,1)$ and $Y\sim N(\mu_2,1)$, with $\mu_1,\mu_2 \in (-\infty,+\infty)$.
The objective is to establish a decision rule for comparing the two means. Suppose the action space is $A = \{a_0,a_1\}$, where $a_0$ corresponds to the case $\mu_1\le\mu_2$ and $a_1$ to the case $\mu_1>\mu_2$. The loss function is given as follows:
$L((\mu_1,\mu_2),a_0)=\begin{cases} 0, &\text{if } \mu_1\le\mu_2\\ \mu_1-\mu_2, &\text{if } \mu_1>\mu_2; \end{cases}$
$L((\mu_1,\mu_2),a_1)=\begin{cases} \mu_2-\mu_1, &\text{if } \mu_1\le\mu_2\\ 0, &\text{if } \mu_1>\mu_2. \end{cases}$
Now, given iid random samples $X_1,\ldots,X_m$ and $Y_1,\ldots,Y_n$ from the above normal populations, respectively, define a decision rule of the following form:
$\delta_c(X_1,\ldots,X_m;Y_1,\ldots,Y_n)=\begin{cases} a_0, &\text{if } \bar X\le\bar Y+c\\ a_1, &\text{otherwise.} \end{cases}$
Consider the class of decision rules $D=\{\delta_c(x_1,\ldots,x_m;y_1,\ldots,y_n): c\in (-\infty,+\infty)\}$. Show that every decision rule in this class $D$ is inadmissible.
I have no idea how to start with it. Any hint or help would be appreciated!
I believe you can in fact show that this decision rule is admissible.
Let $a_0 \in\{0,1\}$ and $a_1 = 1-a_0$.
Note that the loss in this formulation is:
\begin{equation} a_0(\mu_1-\mu_2)I(\mu_2<\mu_1) + (1-a_0)(\mu_2-\mu_1)I(\mu_2\geq\mu_1) \end{equation}
Then the risk of the proposed decision rule $\delta_c$ is \begin{equation} (\mu_1 - \mu_2)I(\mu_2<\mu_1)P[\textrm{Type I Error}|c] + (\mu_2 - \mu_1)I(\mu_2\geq\mu_1)P[\textrm{Type II Error}|c] \end{equation} where a Type I error means choosing $a_0$ when $\mu_1>\mu_2$, and a Type II error means choosing $a_1$ when $\mu_1\le\mu_2$.
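For reference, the error probabilities have closed forms here: since $\bar X-\bar Y\sim \textrm{N}\!\left(\mu_1-\mu_2,\ \tfrac{1}{m}+\tfrac{1}{n}\right)$, the risk of $\delta_c$ can be written out explicitly as
\begin{equation}
R((\mu_1,\mu_2),\delta_c)=
\begin{cases}
(\mu_1-\mu_2)\,\Phi\!\left(\dfrac{c-(\mu_1-\mu_2)}{\sqrt{1/m+1/n}}\right), & \text{if } \mu_1>\mu_2,\\[2ex]
(\mu_2-\mu_1)\left[1-\Phi\!\left(\dfrac{c-(\mu_1-\mu_2)}{\sqrt{1/m+1/n}}\right)\right], & \text{if } \mu_1\le\mu_2.
\end{cases}
\end{equation}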
An alternative way to formulate the same family of decision rules, indexed by $c\in\mathbb{R}$, can be constructed using the Karlin-Rubin Theorem.
Let $\mu = \mu_2 - \mu_1$; then the previous decision problem corresponds to deciding between $\mu>0$ and $\mu\leq 0$, and $\bar{Y} - \bar{X} \sim \textrm{N}\!\left(\mu,\tfrac{1}{m}+\tfrac{1}{n}\right)$. This gives a composite hypothesis with a scalar observation and a scalar parameter, satisfying some of the requirements of the theorem.
The other requirement is a monotone likelihood ratio: for $\mu_a >0$ and $\mu_b\leq 0$ (so $\mu_a>\mu_b$), denoting the likelihood by $\mathcal{L}(\mu\mid\bar{Y}-\bar{X})=\textrm{N}\!\left(\bar{Y}-\bar{X}\ \middle|\ \mu,\tfrac{1}{m}+\tfrac{1}{n}\right)$, the ratio
\begin{equation} \frac{\mathcal{L}(\mu_a\mid\bar{Y}-\bar{X})}{\mathcal{L}(\mu_b\mid\bar{Y}-\bar{X})} \end{equation}
is monotonically non-decreasing in $\bar{Y}-\bar{X}$. This holds because the normal location family is a one-parameter exponential family whose natural parameter is increasing in $\mu$.
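To see this explicitly, write $t=\bar Y-\bar X$ and $\sigma^2=\tfrac{1}{m}+\tfrac{1}{n}$; then
\begin{equation} \frac{\mathcal{L}(\mu_a\mid t)}{\mathcal{L}(\mu_b\mid t)}=\exp\!\left(\frac{(\mu_a-\mu_b)\,t}{\sigma^2}-\frac{\mu_a^2-\mu_b^2}{2\sigma^2}\right), \end{equation}
which is strictly increasing in $t$ whenever $\mu_a>\mu_b$.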
By the theorem, a UMP test is given by $\phi(\cdot\,,\cdot\,|c):\mathbb{R}^2 \rightarrow \{0,1\}$, with $a_0 = \phi(\bar{X},\bar{Y}|c)$, such that:
\begin{equation} \phi(\bar{X},\bar{Y}|c) = I(\bar{Y}-\bar{X}>c) = I(\bar{Y}>\bar{X} + c) = I(\bar{Y}-c>\bar{X}) \end{equation}
Up to replacing $c$ with $-c$ and the probability-zero event $\{\bar X = \bar Y - c\}$, this is the same decision rule as previously; as $c$ ranges over $\mathbb{R}$, it traces out the same class $D$.
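One way to see the matching step used below: the threshold $c$ is in one-to-one correspondence with the test's level, since the rejection probability $P_\mu[\bar Y-\bar X>c]$ is increasing in $\mu$ and
\begin{equation} \sup_{\mu\le 0} P_\mu\!\left[\bar Y-\bar X>c\right]=P_{\mu=0}\!\left[\bar Y-\bar X>c\right]=1-\Phi\!\left(\frac{c}{\sqrt{1/m+1/n}}\right), \end{equation}
so every level $\alpha\in(0,1)$ is attained by exactly one $c\in\mathbb{R}$.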
Note that every decision rule in this binary decision problem has both a Type I and a Type II error probability. Suppose you introduce an alternative decision rule indexed by some parameter $\gamma$; call it $\chi(\bar{Y},\bar{X}|\gamma)$.
In this setting the risk for $\chi(\bar{Y},\bar{X}|\gamma)$ is:
\begin{equation} (\mu_1 - \mu_2)I(\mu_2<\mu_1)P[\textrm{Type I Error}|\gamma] + (\mu_2 - \mu_1)I(\mu_2\geq\mu_1)P[\textrm{Type II Error}|\gamma] \end{equation}
But for any $\gamma$ and any $\mu_1,\mu_2$, matching the Type I error at the same level, $\exists c$ such that: \begin{equation} P[\textrm{Type I Error}|\gamma] = P[\textrm{Type I Error}|c] \end{equation}
for which: \begin{equation} P[\textrm{Type II Error}|\gamma] \geq P[\textrm{Type II Error}|c] \end{equation}
because the Karlin-Rubin theorem says that the previously expressed decision rule is UMP.
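Spelling out the comparison at a given $(\mu_1,\mu_2)$, with $c$ matched to $\gamma$ as above, the risk difference is
\begin{align} R((\mu_1,\mu_2),\chi)-R((\mu_1,\mu_2),\phi) &= (\mu_1-\mu_2)I(\mu_2<\mu_1)\big(P[\textrm{Type I Error}|\gamma]-P[\textrm{Type I Error}|c]\big)\\ &\quad+(\mu_2-\mu_1)I(\mu_2\geq\mu_1)\big(P[\textrm{Type II Error}|\gamma]-P[\textrm{Type II Error}|c]\big)\ \geq 0, \end{align}
since the first term vanishes by the matching of the Type I error and the second is a product of non-negative factors.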
Thus $\forall \mu_1,\mu_2$: $R((\mu_1,\mu_2),\phi)\leq R((\mu_1,\mu_2),\chi)$.
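If it helps to see the matching argument numerically, here is a small Monte Carlo sketch (not part of the proof). It pits $\delta_c$ against a hypothetical rival $\chi$ that uses the same threshold form but only half of the $Y$ sample; the sample sizes, $\gamma$, and parameter values are arbitrary illustrative choices. After matching the Type I error at the boundary $\mu_1=\mu_2$, the full-data rule should show a Type II error no larger than the rival's at the alternative, in line with the UMP property.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, n, reps = 10, 10, 400_000

def sample_means(mu1, mu2, n_y):
    """Draw Monte Carlo replicates of (Xbar, Ybar) for sample sizes m and n_y."""
    xbar = rng.normal(mu1, 1 / np.sqrt(m), reps)
    ybar = rng.normal(mu2, 1 / np.sqrt(n_y), reps)
    return xbar, ybar

# Hypothetical rival rule chi(.|gamma): same threshold form as delta_c,
# but it only gets to use half of the Y sample (purely illustrative).
gamma = 0.2

# --- Type I error, evaluated at the boundary mu1 = mu2 = 0 (H0: mu2 - mu1 <= 0) ---
xb, yb_half = sample_means(0.0, 0.0, n // 2)
alpha_chi = np.mean(yb_half - xb > gamma)       # P[choose a0 | boundary] for chi

# Choose c so that delta_c has the same Type I error at the boundary:
# P_0[Ybar - Xbar > c] = alpha, with Ybar - Xbar ~ N(0, 1/m + 1/n).
c = norm.ppf(1 - alpha_chi, scale=np.sqrt(1 / m + 1 / n))

xb, yb = sample_means(0.0, 0.0, n)
alpha_delta = np.mean(yb - xb > c)              # should be close to alpha_chi

# --- Type II error, evaluated at an alternative mu2 - mu1 = 0.5 > 0 ---
xb, yb_half = sample_means(0.0, 0.5, n // 2)
beta_chi = np.mean(yb_half - xb <= gamma)       # P[choose a1 | alternative] for chi

xb, yb = sample_means(0.0, 0.5, n)
beta_delta = np.mean(yb - xb <= c)              # P[choose a1 | alternative] for delta_c

print(f"Type I : chi {alpha_chi:.4f}  vs  delta_c {alpha_delta:.4f}  (matched by construction)")
print(f"Type II: chi {beta_chi:.4f}  vs  delta_c {beta_delta:.4f}  (delta_c should be no larger)")
```

On a typical run, the two Type I estimates agree to Monte Carlo accuracy, while $\delta_c$ should show the smaller Type II error.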