I was asked to prove the following:
Let $X$ and $Y$ be two r.v.'s and $\mathcal A$ a sub-$\sigma$-field of the underlying $\sigma$-field. $X,\ Y$ are called conditionally independent given $\mathcal A$ if for all measurable functions $f\geq0$ and $g\geq0$, $$E[f(X)g(Y)|\mathcal A]=E[f(X)|\mathcal A]\cdot E[g(Y)|\mathcal A]\quad\text{a.s.}\quad (*)$$ Prove that this definition is equivalent to: For every nonnegative $\mathcal A$-measurable r.v. $Z$ and all measurable functions $f\geq0$ and $g\geq0$, $$E[f(X)g(Y)Z]=E[f(X)ZE[g(Y)|\mathcal A]]\quad (\#)$$
My attempt:
"$\Rightarrow$": First $E[f(X)g(Y)Z]=E[E[f(X)g(Y)Z|\mathcal A]]=E[ZE[f(X)g(Y)|\mathcal A]]$ because $\sigma(Z)\subset\mathcal A$. By $(*)$ we have $$E[f(X)g(Y)Z]=E[ZE[f(X)|\mathcal A]\cdot E[g(Y)|\mathcal A]]=E[E[f(X)|\mathcal A]\cdot ZE[g(Y)|\mathcal A]]$$ Note that $ZE[g(Y)|\mathcal A]$ is $\mathcal A$-measurable, hence $$E[E[f(X)|\mathcal A]\cdot ZE[g(Y)|\mathcal A]]=E[f(X)ZE[g(Y)|\mathcal A]]$$
"$\Leftarrow$": Simply reverse everything above.
Some questions:
Is my proof correct? I am concerned about integrability. If we assume that all the expectations that appear are finite, is my proof correct? On the other hand, is it possible that we do not need to worry about integrability at all, since $f(X),\ g(Y),\ Z$ are all nonnegative, so that every expectation is well defined as a value in $[0,\infty]$?
This definition of conditional independence is quite different from the one usually seen, namely that $P(A\cap B|\mathcal A)=P(A|\mathcal A)\,P(B|\mathcal A)$ a.s. for $A\in\sigma(X)$, $B\in\sigma(Y)$. I don't really see the motivation for the definition above. Can you explain how it relates to the usual one?