I'm reading Bass' book about Stochastic Processes. Since I haven't yet acquired a firm understanding of some basics, I'm currently looking into Conditional Expectation in the Appendix.
The proof of the following proposition (A.21) gives me trouble:
If $X$ and $XY$ are integrable and $Y$ is measurable with respect to $\mathcal F$, then $$ \mathbb E [XY | \mathcal F] = Y \mathbb E[X|\mathcal F]. $$
The proof:
If $A\in\mathcal F$, then for any $B \in \mathcal F$, $$ \mathbb E[\ 1_A \mathbb E [X|\mathcal F];B\ ]=\mathbb E[\ \mathbb E [X|\mathcal F];A\cap B\ ] = \mathbb E [X; A\cap B] = \mathbb E [1_A X; B]. $$ (and more arguments $\dots$)
Three questions:
- Does $\mathbb E[X; A]$ mean $\mathbb E[X 1_A]$ in terms of notation?
- How does the second "$=$" work? What is the argument here?
- After the part of the proof I wrote above, the author writes that this argument, together with linearity, is enough to show the statement for indicator functions (i.e. $Y=1_A$). The original statement does not have the expectation there though, so why is this argument enough? What does linearity have to do with it?
Any help is highly appreciated!
For your first question: yes, $\mathbb E[X;A]:=\mathbb E[X\boldsymbol 1_A]$. So, $$\mathbb E[\boldsymbol 1_A\mathbb E[X\mid \mathcal F];B]=\mathbb E[\boldsymbol 1_{A}\mathbb E[X\mid \mathcal F]\boldsymbol 1_B]=\mathbb E[\mathbb E[X\mid \mathcal F]\boldsymbol 1_{A\cap B}]=\mathbb E[\mathbb E[X\mid \mathcal F];A\cap B].$$ As for the second "$=$" in Bass's display: since $A\cap B\in\mathcal F$, it is just the defining property of conditional expectation, $\mathbb E[\,\mathbb E[X\mid\mathcal F];C\,]=\mathbb E[X;C]$ for every $C\in\mathcal F$, applied with $C=A\cap B$.
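To spell out the role of linearity (your third question): the display above establishes the identity $\mathbb E[YX;B]=\mathbb E[Y\,\mathbb E[X\mid\mathcal F];B]$ for $Y=\boldsymbol 1_A$, and a simple $\mathcal F$-measurable $Y$ is a finite linear combination $Y=\sum_i c_i\boldsymbol 1_{A_i}$ with $A_i\in\mathcal F$, so the identity extends term by term:

$$\mathbb E[YX;B]=\sum_i c_i\,\mathbb E[\boldsymbol 1_{A_i}X;B]=\sum_i c_i\,\mathbb E[\boldsymbol 1_{A_i}\mathbb E[X\mid\mathcal F];B]=\mathbb E[\,Y\,\mathbb E[X\mid\mathcal F];B\,]$$

for every $B\in\mathcal F$. Since $Y\,\mathbb E[X\mid\mathcal F]$ is $\mathcal F$-measurable, this is exactly what it means for it to be (a version of) $\mathbb E[XY\mid\mathcal F]$, i.e. the proposition for simple $Y$.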
Next, if $Y\geq 0$ is $\mathcal F$-measurable, there is a sequence $(Y_n)$ of simple functions with $Y_n\nearrow Y$, so the formula for $Y\geq 0$ follows from the monotone convergence theorem. Finally, if $Y$ is merely $\mathcal F$-measurable, write it as $Y^+-Y^-$, where $Y^+=\max\{Y,0\}\geq 0$ and $Y^-=-\min\{Y,0\}\geq 0$, and apply the formula to $Y^+$ and $Y^-$.
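If it helps to see the proposition concretely, here is a small numerical sanity check on a hypothetical finite probability space (the space, partition, and the random variables $X$, $Y$ below are all made up for illustration). When $\mathcal F$ is generated by a partition, $\mathbb E[X\mid\mathcal F]$ is the probability-weighted average of $X$ over each block, and an $\mathcal F$-measurable $Y$ is constant on each block:

```python
import numpy as np

# Hypothetical finite space Omega = {0,...,5} with uniform probabilities.
p = np.full(6, 1 / 6)

# F is generated by the partition {0,1}, {2,3}, {4,5}.
blocks = [[0, 1], [2, 3], [4, 5]]

X = np.array([1.0, 3.0, -2.0, 4.0, 0.5, 2.0])   # arbitrary random variable
Y = np.array([2.0, 2.0, -1.0, -1.0, 3.0, 3.0])  # constant on blocks => F-measurable

def cond_exp(Z):
    """E[Z | F]: on each partition block, the p-weighted average of Z."""
    out = np.empty_like(Z, dtype=float)
    for b in blocks:
        out[b] = np.dot(p[b], Z[b]) / p[b].sum()
    return out

lhs = cond_exp(X * Y)   # E[XY | F]
rhs = Y * cond_exp(X)   # Y * E[X | F]
print(np.allclose(lhs, rhs))  # True: the two agree pointwise
```

This is only a sketch for the partition-generated case, of course; the proof above is what handles a general $\sigma$-field $\mathcal F$.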