In the proof of the Lehmann–Scheffé theorem I came across an equality which I don't understand. It is the last equality at the bottom, but I give the full setting since I don't know which parts of it are relevant.
Let $(\chi,\sigma,\mathbb{P}_\theta : \theta \in \Omega)$ be a statistical model and let $S:\chi\rightarrow \mathbb{R}$ be a sufficient and complete statistic. Let $T:\chi\rightarrow \mathbb{R}^d$ be an unbiased estimator of the statistical quantity $\tau:\Omega \rightarrow \mathbb{R}^d$. By the Rao–Blackwell theorem, we obtain an improved version $T^*$ of $T$ (with no larger variance) by setting $$T^* = \mathbb{E}[T\mid\sigma(S)].$$
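To see the Rao–Blackwell improvement numerically, here is a minimal simulation sketch (not from the question) in the standard Bernoulli example: take $X_1,\dots,X_n \sim \mathrm{Bern}(p)$ i.i.d., with complete sufficient statistic $S=\sum_i X_i$, the naive unbiased estimator $T = X_1$, and its Rao–Blackwellization $T^* = \mathbb{E}[X_1 \mid S] = S/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 10, 20_000

# reps independent samples of size n from Bern(p);
# S = sum of the sample is sufficient and complete for p.
X = rng.binomial(1, p, size=(reps, n))
S = X.sum(axis=1)

# Naive unbiased estimator of p: T = X_1 (first observation only).
T = X[:, 0].astype(float)

# Rao-Blackwellized version: T* = E[X_1 | S] = S / n, the sample mean.
T_star = S / n

print("means:    ", T.mean(), T_star.mean())   # both approximately p
print("variances:", T.var(), T_star.var())     # Var(T*) is much smaller
```

Both estimators are unbiased, but conditioning on $S$ cuts the variance from roughly $p(1-p)$ down to roughly $p(1-p)/n$.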
To show uniqueness of $T^*$, we take another unbiased estimator $H$ and its Rao–Blackwellized version $H^* = \mathbb{E}[H\mid\sigma(S)]$.
Then there exist measurable functions $t,h$ with $$H^*(X)=h(S(X)), \qquad T^*(X)=t(S(X)).$$ By their unbiasedness, we can write $$0=\mathbb{E}_\theta[T^*] - \mathbb{E}_\theta[H^*] = \mathbb{E}_\theta[T^* - H^*] = \mathbb{E}_\theta[t(S)-h(S)] = \mathbb{E}_\theta[t-h \mid \sigma(S)]$$
Why does the last equality hold? Is there an intuitive way of explaining that?
That last equality isn't true as written. Perhaps it's a typographical error and the intended equality is the trivial assertion $$ \mathbb{E}_\theta[t(S)-h(S)]=\mathbb{E}_\theta[(t-h)(S)]. $$ In the rightmost expression $t-h$ is a measurable function. Given that $\mathbb{E}_\theta[(t-h)(S)]$ equals zero for every $\theta$, you conclude by completeness of $S$ that $\mathbb{P}_\theta[(t-h)(S)=0]=1$ for all $\theta$. Translating back, this means $T^*$ and $H^*$ coincide with probability one.
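To make the completeness step concrete, here is the standard worked example (Bernoulli model, not from the post) of why $\mathbb{E}_\theta[g(S)]=0$ for all $\theta$ forces $g(S)=0$ almost surely:

```latex
% X_1,...,X_n iid Bern(theta), theta in (0,1), S = sum X_i.
% Suppose E_theta[g(S)] = 0 for all theta. Then
\mathbb{E}_\theta[g(S)]
  = \sum_{k=0}^{n} g(k)\binom{n}{k}\theta^{k}(1-\theta)^{n-k}
  = (1-\theta)^{n}\sum_{k=0}^{n} g(k)\binom{n}{k}
      \left(\frac{\theta}{1-\theta}\right)^{k} = 0 .
% As theta ranges over (0,1), the ratio theta/(1-theta) ranges over
% (0,infinity); a polynomial vanishing on an interval has all
% coefficients zero, so g(k) = 0 for k = 0,...,n, i.e. g(S) = 0 a.s.
```

Applied to $g = t-h$, this is exactly the uniqueness argument in the answer.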