I'm reading Casella & Berger (1987) [1]. On page 107, the following can be found:
We use the notation $Pr_{\pi}(H_0|x)$ to indicate that $\pi$ is the prior used in calculating a posterior probability. Consider the random triple $(A, \theta,x)$ with joint distribution defined by the following. The distribution of $X|\Theta=\theta$ has density $f(x-\theta)$, the distribution of $\Theta|A=\alpha$ is $\pi_{\alpha}$, and the distribution of $A$ is $P$. Then for any $\pi \in \Gamma_M$, $$\begin{align} Pr_{\pi}(H_0|x) & = Pr_{\pi}(\Theta \leq 0 | X = x) \\ &= E_A(Pr(\Theta \leq 0 | A= \alpha, X=x) | X=x)\end{align}$$
Here, $\Gamma_M$ is the mixture of all elements of the set $\Gamma = \{\pi_{\alpha}: \alpha \in \mathcal{A}\}$, which is a class of prior distributions on the real line indexed by the set $\mathcal{A}$. $P$ is some probability measure on $\mathcal{A}$ and $f(x-\theta)$ is symmetric about zero and has monotone likelihood ratio.
I'm having a hard time understanding the last equality in the equation above, namely:
$$E_A(Pr(\Theta \leq 0 | A= \alpha, X=x) | X=x) = Pr_{\pi}(\Theta \leq 0 | X = x)$$
Any hints and help would be greatly appreciated.
[1] Casella, G., & Berger, R. L. (1987). Reconciling Bayesian and frequentist evidence in the one-sided testing problem. Journal of the American Statistical Association, 82(397), 106-111.
To show
$$ Pr_{\pi}(\Theta \leq 0 | X = x) = E(Pr(\Theta \leq 0 | A= \alpha, X=x) | X=x),$$
it is easier to prove the corresponding statement at the level of random variables,
$$ Pr_{\pi}(\Theta \leq 0 | X ) = E(Pr(\Theta \leq 0 | A, X) | X).$$
Start with
$$\begin{align} E(P(\Theta \leq 0 | A, X) | X) &= E(E(I_{\Theta \leq 0} | A, X) | X) \\ &= E(E(I_{\Theta \leq 0} | \sigma(A, X)) | \sigma(X)) \\ &= E(I_{\Theta \leq 0} | \sigma(X)) \\ &= E(I_{\Theta \leq 0} | X) \\ &= P(\Theta \leq 0 | X), \end{align}$$
where the third equality is the tower property, which applies since $\sigma(X) \subset \sigma(A, X)$. So
$$ Pr_{\pi}(\Theta \leq 0 | X ) = E(Pr(\Theta \leq 0 | A, X) | X)$$
and hence, evaluating at $X = x$,
$$ Pr_{\pi}(\Theta \leq 0 | X = x) = E(Pr(\Theta \leq 0 | A= \alpha, X=x) | X=x).$$
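As a numerical sanity check (not from the paper), here is a sketch in Python of the identity for a hypothetical two-component model: $A \in \{1,2\}$ with a discrete $P$, $\pi_\alpha = N(0, \tau_\alpha^2)$, and $X|\Theta=\theta \sim N(\theta, 1)$, so $f(x-\theta)$ is symmetric with MLR. The left side is computed by brute-force quadrature against the mixture prior; the right side is the posterior-weighted average of the component posteriors, $\sum_\alpha P(A=\alpha|x)\,Pr(\Theta \leq 0 | A=\alpha, x)$. All numerical choices (mixture weights, variances, the observed $x$) are made up for illustration.

```python
import math

def norm_pdf(z, mean=0.0, var=1.0):
    return math.exp(-(z - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical hierarchical model (illustrative values, not from the paper):
# A in {1, 2} with P(A=1)=0.3, P(A=2)=0.7;
# Theta | A=alpha ~ N(0, tau2[alpha]);  X | Theta=theta ~ N(theta, 1).
p_A  = {1: 0.3, 2: 0.7}
tau2 = {1: 1.0, 2: 4.0}
x = 1.5  # observed data point

# Left side: Pr_pi(Theta <= 0 | X = x), computed directly against the
# mixture prior pi = sum_alpha p_A[alpha] * N(0, tau2[alpha]) by a
# Riemann sum over a fine grid of theta values.
def mixture_prior(theta):
    return sum(p * norm_pdf(theta, 0.0, tau2[a]) for a, p in p_A.items())

n = 200_000
grid = [-20.0 + 40.0 * i / n for i in range(n + 1)]
h = grid[1] - grid[0]
post_unnorm = [norm_pdf(x, t, 1.0) * mixture_prior(t) for t in grid]
total = sum(post_unnorm) * h
below = sum(w for t, w in zip(grid, post_unnorm) if t <= 0.0) * h
lhs = below / total

# Right side: E_A( Pr(Theta <= 0 | A, X=x) | X=x ).  With normal-normal
# conjugacy, Theta | A=alpha, X=x ~ N(m, v) where
#   m = x * tau2 / (1 + tau2)  and  v = tau2 / (1 + tau2),
# and the posterior weight of alpha is proportional to
#   p_A[alpha] * N(x; 0, 1 + tau2[alpha])  (the marginal of X given A=alpha).
weights = {a: p * norm_pdf(x, 0.0, 1.0 + tau2[a]) for a, p in p_A.items()}
Z = sum(weights.values())
rhs = 0.0
for a in p_A:
    m = x * tau2[a] / (1.0 + tau2[a])
    v = tau2[a] / (1.0 + tau2[a])
    rhs += (weights[a] / Z) * norm_cdf((0.0 - m) / math.sqrt(v))

print(lhs, rhs)  # the two values agree up to quadrature error
```

The two computations agree, which is exactly the claim: the posterior under the mixture prior is the posterior-weighted mixture of the component posteriors.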