I am having trouble understanding some things about the following example, taken from the book "The Bayesian Choice" by Christian P. Robert.
My issue begins with "its risk function is"
Previously we had defined the risk function to be
$$\int_{X} L(\theta,\delta(x))f(x|\theta)dx$$
I am having trouble seeing how this leads to the bottom result.
We have two observations, so at first I would expect a sum with two terms. I guess because the estimator takes a single value, we only get one term in the sum? Where do we account for $f(x|\theta)$? Why do we set the estimator equal to $\theta$? Is it because we are estimating $\theta$, and so we only consider the cases that give $\theta$? I understand the rest: for example, if the two observations are not equal then the estimator gives $\theta$, and if they are equal it does not. But I don't understand why a probability symbol appears in the calculation of the risk function.

Let $x=(x_1, x_2)$. Then
\begin{align} R(\theta, \delta_0) &= \int_X L(\theta, \delta_0(x))f(x|\theta) \,dx \\ &=\mathbb{E} \left[ 1-\mathbb{I}_\theta(\delta_0(X))\right] \\ &= 1-\mathbb{E}\left[\mathbb{I}_\theta(\delta_0(X))\right] \\ &= 1-\Pr(\delta_0(X) =\theta) \\ &= 1-\Pr\left( \frac{X_1+X_2}{2} = \theta \right) \\ &= 1-\Pr(X_1 \ne X_2) \\ &=0.5, \end{align}
since, as you note, the average $(X_1+X_2)/2$ equals $\theta$ exactly when $X_1 \ne X_2$, and that event has probability $1/2$. Note also that the risk is written $R(\theta,\delta_0)$ rather than $R(\theta,\delta_0(x))$: the integral over $X$ averages out $x$, so the risk depends only on $\theta$ and the estimator. This is also why the probability appears: integrating the loss against $f(x|\theta)$ is exactly taking an expectation, and the expectation of an indicator is a probability.
Here I have used that the expectation of an indicator variable equals the probability that it takes the value $1$:
$$\mathbb{E}(\mathbb{I}_A) = \Pr(A)\cdot 1 + \Pr(A^c)\cdot 0 = \Pr(A). $$
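To make the calculation concrete, here is a small Monte Carlo sketch of the risk. It assumes the model from Robert's example, namely that each observation equals $\theta-1$ or $\theta+1$ with probability $1/2$ each (this assumption is mine, inferred from "if they are not equal then the estimator gives $\theta$"); the simulated average loss should come out close to $0.5$.

```python
import random

def simulate_risk(theta=0.0, n_trials=100_000, seed=42):
    """Estimate R(theta, delta_0) by averaging the 0-1 loss over draws of x.

    Assumed model: X_i = theta - 1 or theta + 1, each with probability 1/2.
    Loss: L(theta, d) = 1 - I_theta(d), i.e. 1 unless d equals theta exactly.
    """
    rng = random.Random(seed)
    total_loss = 0
    for _ in range(n_trials):
        x1 = theta + rng.choice([-1.0, 1.0])
        x2 = theta + rng.choice([-1.0, 1.0])
        delta0 = (x1 + x2) / 2            # the estimator (X1 + X2)/2
        total_loss += 0 if delta0 == theta else 1   # 0-1 loss
    return total_loss / n_trials

print(simulate_risk())  # close to 0.5
```

The average of the loss over simulated draws is exactly the integral $\int_X L(\theta,\delta_0(x))f(x|\theta)\,dx$ approximated empirically, which is why it lands near $1 - \Pr(X_1 \ne X_2) = 0.5$ for any $\theta$.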