I'm reading Steven E. Shreve's "Stochastic Calculus for Finance II, Continuous-Time models", and a bit confused on the Independence Lemma (Lemma 2.3.4). The lemma says:
Lemma 2.3.4 (Independence). Let $(\Omega,\mathscr{F},\mathbb{P})$ be a probability space and let $\mathscr{G}$ be a sub-$\sigma$-algebra of $\mathscr{F}$. Suppose the random variables $X_1,\dots,X_K$ are $\mathscr{G}$-measurable and the random variables $Y_1,\dots,Y_L$ are independent of $\mathscr{G}$. Let $f(x_1,\dots,x_K,y_1,\dots,y_L)$ be a function of the dummy variables $x_1,\dots,x_K$ and $y_1,\dots,y_L$, and define $$g(x_1,\dots,x_K) = \mathbb{E}f(x_1,\dots,x_K,Y_1,\dots,Y_L). \tag{2.3.27}$$ Then $$\mathbb{E}[f(X_1,\dots,X_K,Y_1,\dots,Y_L)\mid \mathscr{G}]=g(X_1,\dots,X_K). \tag{2.3.28}$$
Then the book further explains that
... As with Lemma 2.5.3 of Volume I, the idea here is that since the information in $\mathscr{G}$ is sufficient to determine the values of $X_1,\dots,X_K$, we should hold these random variables constant when estimating $f(X_1,\dots,X_K,Y_1,\dots,Y_L)$. The other random variables, $Y_1,\dots,Y_L$, are independent of $\mathscr{G}$, and so we should integrate them out without regard to the information in $\mathscr{G}$. These two steps, holding $X_1,\dots,X_K$ constant and integrating out $Y_1,\dots,Y_L$, are accomplished by (2.3.27). We get an estimate that depends on the values of $X_1,\dots,X_K$ and, to capture this fact, we replace the dummy (nonrandom) variables $x_1,\dots,x_K$ by the random variables $X_1,\dots,X_K$ at the last step. Although Lemma 2.5.3 of Volume I has a relatively simple proof, the proof of Lemma 2.3.4 requires some measure-theoretic ideas beyond the scope of this text, and will not be given.
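For concreteness, the simplest instance of the lemma (my own specialization, not from the book) is $K = L = 1$ with $f(x,y) = xy$, $X$ $\mathscr{G}$-measurable, and $Y$ integrable and independent of $\mathscr{G}$. Then (2.3.27) gives $$g(x) = \mathbb{E}[xY] = x\,\mathbb{E}[Y],$$ so (2.3.28) reads $$\mathbb{E}[XY \mid \mathscr{G}] = X\,\mathbb{E}[Y],$$ i.e. the familiar rule "take out what is known, integrate out what is independent."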
OK... I'm confused here. Is this "Independence Lemma" really non-trivial?
In my mind I just think: since (2.3.27) says $$g(x_1,\dots,x_K) = \mathbb{E}f(x_1,\dots,x_K,Y_1,\dots,Y_L),$$ we have $$g(X_1,\dots,X_K) = \mathbb{E}f(X_1,\dots,X_K,Y_1,\dots,Y_L),$$ hence we get (2.3.28): $$\mathbb{E}[f(X_1,\dots,X_K,Y_1,\dots,Y_L)\mid \mathscr{G}]=g(X_1,\dots,X_K). $$
I don't understand why we need a lemma to state something so "straightforward" and instinctively right.

I guess I must be neglecting something. There must be something non-trivial that I took for granted. What is it?
You have to be careful with respect to which variable you integrate: By definition,
$$g(x_1,\ldots,x_K) = \mathbb{E}f(x_1,\ldots,x_K,Y_1,\ldots,Y_L) = \int_\Omega f(x_1,\ldots,x_K,Y_1(\omega_Y),\ldots,Y_L(\omega_Y)) \, d\mathbb{P}(\omega_Y).$$
Hence,
$$g(X_1,\ldots,X_K)(\omega_\mathscr{G}) = \int f(X_1(\omega_\mathscr{G}),\ldots,X_K(\omega_\mathscr{G}),Y_1(\omega_Y),\ldots,Y_L(\omega_Y)) \, d\mathbb{P}(\omega_Y).$$
This means that we integrate with respect to the variable $\omega_Y$ while $\omega_\mathscr{G}$ is still held fixed. In contrast,
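A toy illustration of this point (my own, not from the book): toss two fair coins, let $X = \mathbb{1}_{\{\text{first toss is } H\}}$, $Y = \mathbb{1}_{\{\text{second toss is } H\}}$, $\mathscr{G} = \sigma(X)$, and $f(x,y) = x + y$. Then $$g(x) = \mathbb{E}[x + Y] = x + \tfrac12, \qquad\text{so}\qquad g(X)(\omega) = X(\omega) + \tfrac12,$$ which is not the same random variable as $f(X,Y)(\omega) = X(\omega) + Y(\omega)$: in $g(X)$ the second toss has been averaged out, and $\omega$ enters only through the first toss.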
$$\mathbb{E}[f(X_1,\ldots,X_K,Y_1,\ldots,Y_L)] = \int_\Omega f(X_1(\omega_\mathscr{G}),\ldots,X_K(\omega_\mathscr{G}),Y_1(\omega_\mathscr{G}),\ldots,Y_L(\omega_\mathscr{G})) \, d\mathbb{P}(\omega_\mathscr{G})$$ is a single integral in which the same sample point $\omega_\mathscr{G}$ is fed into both the $X_i$ and the $Y_j$, whereas $$\mathbb{E}[g(X_1,\ldots,X_K)] = \int_\Omega \left( \int_\Omega f(X_1(\omega_\mathscr{G}),\ldots,X_K(\omega_\mathscr{G}),Y_1(\omega_Y),\ldots,Y_L(\omega_Y)) \, d\mathbb{P}(\omega_Y) \right) d\mathbb{P}(\omega_\mathscr{G})$$ is an iterated integral in which the $X_i$ and the $Y_j$ are evaluated at different sample points. Without the independence assumption these two expressions need not agree; their equality is part of what the lemma asserts.
Similar considerations hold for the conditional expectation. Therefore, the "Independence Lemma" is not an obvious consequence of the definition of $g$.
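You can also check the lemma numerically. Below is a minimal Monte Carlo sketch with a concrete choice of my own (not from the book): $\mathscr{G} = \sigma(X)$, $X \sim \mathrm{Uniform}(0,1)$, $Y \sim N(0,1)$ independent of $X$, and $f(x,y) = xy + x^2$, so that $g(x) = x\,\mathbb{E}[Y] + x^2 = x^2$. It shows that $g(X)$ and $f(X,Y)$ differ as random variables, while $g(X)$ matches the conditional average of $f(X,Y)$ given $X$ (estimated by binning on $X$) and the two have the same expectation.

```python
import numpy as np

# Hypothetical concrete setup: G = sigma(X), X ~ Uniform(0,1)
# G-measurable, Y ~ N(0,1) independent of G.
# f(x, y) = x*y + x**2, hence g(x) = x*E[Y] + x**2 = x**2.
rng = np.random.default_rng(0)
n = 1_000_000
X = rng.uniform(0.0, 1.0, n)
Y = rng.standard_normal(n)

f_XY = X * Y + X**2   # the random variable f(X, Y)
g_X = X**2            # g(X), the claimed conditional expectation

# Pointwise, g(X) is NOT f(X, Y): the Y-part has been integrated out.
print(np.max(np.abs(f_XY - g_X)))   # large, nowhere near 0

# But their expectations agree (tower property):
print(f_XY.mean(), g_X.mean())      # both close to E[X^2] = 1/3

# Binned check of E[f(X,Y) | X] = g(X): average f(X,Y) over thin
# slices of X and compare with g evaluated at the slice centers.
bins = np.linspace(0.0, 1.0, 21)
idx = np.digitize(X, bins) - 1
cond_means = np.array([f_XY[idx == k].mean() for k in range(20)])
centers = (bins[:-1] + bins[1:]) / 2
print(np.max(np.abs(cond_means - centers**2)))  # small
```

The binning step is a crude stand-in for conditioning on $\sigma(X)$; it works here because $g$ is continuous, so the conditional expectation is nearly constant on each thin slice.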