I am reviewing the proof that the conditional expectation of the CEF error $e$ (the Conditional Expectation Function error) given $X$ is zero. This is: $$e= Y-m(X)$$ $$E(e|X)=E(Y|X)-E(m(X)|X)$$ $$E(e|X)=E(Y|X)-E(Y|X)$$ $$E(e|X)=0$$
where $m(x) = E(Y|X=x)$, and $X$ and $Y$ are random variables.
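As a numerical sanity check (not part of the proof), here is a small script that builds an arbitrary discrete joint pmf for $(X, Y)$, computes $m(x) = E(Y|X=x)$, and verifies that $E(e \mid X = x) = 0$ for every $x$. The pmf values are made up purely for illustration.

```python
# Hypothetical joint pmf p(x, y) on a small grid; values chosen arbitrarily
# so that they sum to 1.
pmf = {
    (0, 1): 0.1, (0, 2): 0.2, (0, 3): 0.1,
    (1, 1): 0.2, (1, 2): 0.1, (1, 3): 0.3,
}

def cond_mean_y(x):
    """m(x) = E(Y | X = x), computed from the joint pmf."""
    px = sum(p for (xx, _), p in pmf.items() if xx == x)
    return sum(y * p for (xx, y), p in pmf.items() if xx == x) / px

for x in {xx for (xx, _) in pmf}:
    m_x = cond_mean_y(x)
    px = sum(p for (xx, _), p in pmf.items() if xx == x)
    # E(e | X = x) = E(Y - m(x) | X = x), which should vanish for every x.
    e_cond = sum((y - m_x) * p for (xx, y), p in pmf.items() if xx == x) / px
    assert abs(e_cond) < 1e-12
```

Of course this only illustrates the identity for one discrete example; the question below is about how the general proofs work.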
When I check the justification for why $E(m(X)|X) = m(X)$ (i.e., equals $E(Y|X)$), I find two types of proof:
- Conditioning Theorem:
If $E|g(X)Y|<\infty$ then $E(g(X)Y|X)=g(X)E(Y|X)$.
Proof: $$E(g(X)Y|X=x)=\int_{-\infty}^\infty g(x)y f(y|x)\,dy=g(x)\int_{-\infty}^\infty y f(y|x)\,dy=g(x)E(Y|X=x)$$
- Stability Conditional Expectation:
If $X$ is a random variable and $f$ is a measurable function, then $E(f(X)\mid X)=f(X)$.
Proof:
Since $f(X)$ is $\sigma(X)$-measurable, it fulfills the three properties in the definition of conditional expectation, so by uniqueness the equality holds almost surely.
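Spelled out, the verification behind that uniqueness argument (under the standing integrability assumption $E|f(X)|<\infty$) is:

```latex
% The three defining properties of Z = E(f(X) | X), checked for the
% candidate Z = f(X):
%   (i)  measurability: f(X) is \sigma(X)-measurable by construction;
%   (ii) integrability: E|f(X)| < \infty by assumption;
%   (iii) partial averaging: for every A \in \sigma(X),
\int_A f(X)\,dP = \int_A f(X)\,dP,
% which holds trivially. By the almost-sure uniqueness of conditional
% expectation,
E(f(X)\mid X) = f(X) \quad \text{a.s.}
```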
My question is: What is the relationship between these two proofs? Which is more rigorous? Which is more appropriate to the context of the problem I am dealing with?
The second one is the right one: it works in general. The first assumes the existence of densities. Avoid using density functions unless you are told that the random variables have absolutely continuous distributions.