A decision rule $\delta$ is said to be unbiased if $\mathbb E_\theta[L(\theta^\prime,\delta)]\geq\mathbb E_\theta[L(\theta,\delta)]$ for all $\theta,\theta^\prime\in\Theta$.
In the context of testing theory, a test $\delta$ has risk given by $$R(\theta,\delta) = \mathbb E_\theta[L(\theta,\delta)] = \ell_0\chi_{\Theta_0}(\theta)\,\mathbb E_\theta[\delta] + \ell_1\chi_{\Theta_1}(\theta)(1-\mathbb E_\theta[\delta]),$$ where $\ell_0,\ell_1>0$ are constants, $\Theta = \Theta_0\cup\Theta_1$ (corresponding to the hypotheses $H_0$ and $H_1$), and $\chi$ denotes the characteristic function. The test $\delta$ is said to be unbiased if $\mathbb E_\theta[\delta]\leq\alpha$ for all $\theta\in\Theta_0$ and $\mathbb E_\theta[\delta] \geq\alpha$ for all $\theta\in\Theta_1$, where $\alpha :=\frac{\ell_1}{\ell_0+\ell_1}$.
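Since $\Theta_0$ and $\Theta_1$ are disjoint, exactly one of the two indicator terms survives at any $\theta$, so the risk can be written piecewise (this is the form the equivalence argument uses):

```latex
% Risk evaluated separately on the two hypothesis regions:
% on \Theta_0 only the first term survives, on \Theta_1 only the second.
R(\theta,\delta) =
\begin{cases}
\ell_0\,\mathbb E_\theta[\delta], & \theta\in\Theta_0,\\[4pt]
\ell_1\,\bigl(1-\mathbb E_\theta[\delta]\bigr), & \theta\in\Theta_1.
\end{cases}
```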
Question: Both statements are equivalent, but how can I prove it? I tried both directions without success. I suspect there is a trick I am not aware of (there is a similar result for estimators, where the trick is to use the bias-variance decomposition rather than proving $\Rightarrow$ and $\Leftarrow$ separately).
I found an answer to the question: it is actually immediately obvious (well, it wasn't for me at first) that $$\mathbb E_\theta[L(\theta^\prime,\delta)]\geq\mathbb E_\theta[L(\theta,\delta)]\quad\text{for all }\theta,\theta^\prime\in\Theta$$ is equivalent to the following four statements, obtained by distinguishing which hypothesis region $\theta$ and $\theta^\prime$ lie in:

1. $\ell_0\mathbb E_\theta[\delta]\geq\ell_0\mathbb E_\theta[\delta]$ for all $\theta,\theta^\prime\in\Theta_0$,
2. $\ell_1(1-\mathbb E_\theta[\delta])\geq\ell_1(1-\mathbb E_\theta[\delta])$ for all $\theta,\theta^\prime\in\Theta_1$,
3. $\ell_1(1-\mathbb E_\theta[\delta])\geq\ell_0\mathbb E_\theta[\delta]$ for all $\theta\in\Theta_0$, $\theta^\prime\in\Theta_1$,
4. $\ell_0\mathbb E_\theta[\delta]\geq\ell_1(1-\mathbb E_\theta[\delta])$ for all $\theta\in\Theta_1$, $\theta^\prime\in\Theta_0$.

The first two statements are trivially true and thus can be omitted. The last two statements are equivalent to $\mathbb E_\theta[\delta]\leq\alpha$ for all $\theta\in\Theta_0$ and $\mathbb E_\theta[\delta]\geq\alpha$ for all $\theta\in\Theta_1$.
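For completeness, here is the rearrangement behind the $\Theta_0$ case (the $\Theta_1$ case is the same inequality reversed):

```latex
% For \theta \in \Theta_0 the unbiasedness inequality reads
% \ell_1\bigl(1-\mathbb E_\theta[\delta]\bigr) \geq \ell_0\,\mathbb E_\theta[\delta];
% collecting the \mathbb E_\theta[\delta] terms on one side gives
\ell_1 \geq (\ell_0+\ell_1)\,\mathbb E_\theta[\delta]
\quad\Longleftrightarrow\quad
\mathbb E_\theta[\delta] \leq \frac{\ell_1}{\ell_0+\ell_1} = \alpha .
```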