In the proof of Proposition 5.6 ("conditional independence, Doob", p. 87), Kallenberg (1997) makes the following move, whose justification eludes me: $$E\left[P^\mathcal{G}H;F\cap G\right]=E\left[\left(P^\mathcal{G}F\right)\left(P^\mathcal{G}H\right);G\right].$$ Here $F\in\mathcal{F}$, $G\in\mathcal{G}$ and $H\in\mathcal{H}$, where $\mathcal{F}$, $\mathcal{G}$, $\mathcal{H}$ are sub-$\sigma$-algebras of the same probability space; $P^\mathcal{G}$ denotes $P\left(\cdot\mid\mathcal{G}\right)$, the conditional probability given $\mathcal{G}$ (e.g. $P^\mathcal{G}H$ is the conditional probability of $H$ given $\mathcal{G}$); and $E\left[f;A\right]$ means $E\left[f\,\mathbb{1}_A\right]$ for any measurable, integrable function $f$ and event $A$.
It is assumed that $\mathcal{F}\perp_\mathcal{G}\mathcal{H}$ (i.e. $\mathcal{F}$ and $\mathcal{H}$ are conditionally independent given $\mathcal{G}$).
This move certainly holds (even without the additional assumption of conditional independence) when $F$ coincides with a member of $\mathcal{G}$ up to a null set, but I fail to see the justification in the general case and would appreciate any help.
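To spell out the easy case just mentioned: if $F$ agrees with a set in $\mathcal{G}$ up to a null set, then $P^\mathcal{G}F=\mathbb{1}_F$ almost surely, so $$E\left[\left(P^\mathcal{G}F\right)\left(P^\mathcal{G}H\right);G\right]=E\left[\mathbb{1}_F\,P^\mathcal{G}H;G\right]=E\left[P^\mathcal{G}H;F\cap G\right],$$ which is the claimed identity.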
Sooo... $X=\mathbf 1_F$ is basically any bounded random variable, $Y=\mathbb P(H\mid\mathcal G)\cdot\mathbf 1_G$ is bounded and $\mathcal G$-measurable, and one wants to check that $$ \mathbb E(XY)=\mathbb E(\mathbb E(X\mid\mathcal G)Y). $$ Rewritten in this form, the identity is direct: since $Y$ is bounded and $\mathcal G$-measurable, the pull-out property of conditional expectation yields the stronger almost-sure form $$ \mathbb E(XY\mid\mathcal G)=\mathbb E(X\mid\mathcal G)Y,\quad\text{almost surely}, $$ and taking expectations of both sides gives the identity above. Since $\mathbb E(X\mid\mathcal G)=\mathbb E(\mathbf 1_F\mid\mathcal G)=P^\mathcal{G}F$, this is exactly the move in Kallenberg's proof. (Note that the conditional independence hypothesis on $\mathcal F$ is not needed.)
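For anyone who wants a concrete sanity check, here is a minimal sketch that verifies the identity exactly on a finite probability space, with $\mathcal G$ generated by a partition. The weights, the partition, and the helper names (`prob`, `cond_prob`, `E`) are ad hoc choices for illustration, not anything from Kallenberg:

```python
import random

random.seed(0)

# A finite probability space Omega = {0,...,7} with arbitrary positive weights
# (the space, weights, and partition below are purely illustrative).
omega = list(range(8))
w = [random.random() + 0.1 for _ in omega]
p = {x: w[x] / sum(w) for x in omega}  # P({x})

# The sigma-algebra G is generated by this partition of Omega;
# the G-measurable events are exactly the unions of blocks.
blocks = [{0, 1}, {2, 3, 4}, {5}, {6, 7}]

def prob(A):
    """P(A)."""
    return sum(p[x] for x in A)

def cond_prob(A):
    """P(A | G) as a function on Omega: on each block B it equals P(A & B)/P(B)."""
    f = {}
    for B in blocks:
        v = prob(A & B) / prob(B)
        for x in B:
            f[x] = v
    return f

def E(f, A):
    """E[f; A] = E[f 1_A]."""
    return sum(f[x] * p[x] for x in A)

# Check E[P(H|G); F n G] == E[P(F|G) P(H|G); G] for random events F, H and
# random G-measurable events G -- no conditional independence is imposed.
for _ in range(1000):
    F = {x for x in omega if random.random() < 0.5}
    H = {x for x in omega if random.random() < 0.5}
    G = set().union(*[B for B in blocks if random.random() < 0.5])
    PF, PH = cond_prob(F), cond_prob(H)
    lhs = E(PH, F & G)
    rhs = E({x: PF[x] * PH[x] for x in omega}, G)
    assert abs(lhs - rhs) < 1e-12, (F, H, G, lhs, rhs)

print("identity holds for 1000 random choices of (F, H, G)")
```

Because the space is finite and $\mathcal G$ is generated by a partition, `cond_prob` computes a version of $P(\cdot\mid\mathcal G)$ exactly (constant on each block), so the check is exact rather than Monte Carlo; note also that $F$ and $H$ are drawn with no conditional independence between them, matching the remark above.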