Conditional expectation of $h(X,Y)$, where $h$ is measurable and bounded


This is a follow-up to a question I posted earlier.

Let $X$ and $Y$ be $\mathbb{R}^n$- and $\mathbb{R}^m$-valued random variables on the probability space $(\Omega, \mathcal{F}, P)$. Further assume that $X$ is $\mathcal{F}_1$-measurable and $Y$ is $\mathcal{F}_2$-measurable, where $\mathcal{F}_1$ and $\mathcal{F}_2$ are independent sub-$\sigma$-algebras of $\mathcal{F}$. Let $h : \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}$ be $\mathcal{B}(\mathbb{R}^n) \otimes \mathcal{B}(\mathbb{R}^m)$-$\mathcal{B}(\mathbb{R})$-measurable and bounded. I want to show that

$$ E[h(X, Y) \mid \mathcal{F}_1](\omega) = E[h(x, Y)] \big|_{x = X(\omega)}. $$
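Before proving this in general, it can be reassuring to check the claimed identity numerically in a hypothetical concrete instance (not part of the original question): take $n = m = 1$, $X, Y$ independent standard normals, and $h(x, y) = \sin(x + y)$, which is jointly measurable and bounded. For $Y \sim N(0,1)$ one has $E[\cos Y] = e^{-1/2}$ and $E[\sin Y] = 0$, so $g(x) := E[h(x, Y)] = e^{-1/2} \sin x$. The defining property of conditional expectation can then be tested on a set such as $F = \{X > 0\} \in \mathcal{F}_1$ by Monte Carlo:

```python
import numpy as np

# Hypothetical concrete instance: X, Y independent N(0, 1),
# h(x, y) = sin(x + y). For Y ~ N(0, 1): E[cos Y] = exp(-1/2),
# E[sin Y] = 0, hence g(x) := E[h(x, Y)] = exp(-1/2) * sin(x).
rng = np.random.default_rng(0)
N = 1_000_000
X = rng.standard_normal(N)
Y = rng.standard_normal(N)

g = lambda x: np.exp(-0.5) * np.sin(x)

# Defining property of conditional expectation on F = {X > 0}:
#   integral over F of h(X, Y) dP  =  integral over F of g(X) dP.
F = X > 0
lhs = np.mean(np.sin(X + Y) * F)
rhs = np.mean(g(X) * F)
print(lhs, rhs)  # the two Monte Carlo estimates should agree closely
```

This does not prove anything, of course, but it confirms that the candidate formula has the right defining property in at least one example.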

With the help of the answer to my previous question I have managed to show this relation for $h = 1_D (x ,y)$, $D \in \mathcal{B}( \mathbb{R^n} ) \otimes \mathcal{B}( \mathbb{R^m} )$. By the definition of the conditional expectation this means that $$ \int_{F} 1_D (X, Y) dP = \int_F E[1_D (x, Y)] \big|_{x = X} dP, \quad \text{for any } F \in \mathcal{F}_1. $$

By linearity, this extends to all positive simple functions. I would like to extend the result to positive measurable functions and then to all bounded measurable functions.


Some thoughts:

Let $h(x, y)$ be positive, measurable and bounded.

Then there exists an increasing sequence of positive simple functions $h_n(x, y)$ converging pointwise to $h(x, y)$. Then also $h_n(X(\omega), Y(\omega)) \uparrow h(X(\omega), Y(\omega))$. Using the monotone convergence theorem we can write \begin{align} \int_{F} h(X, Y) \, dP &= \int_{F} \lim_{n \rightarrow \infty} h_n(X, Y) \, dP = \lim_{n \rightarrow \infty} \int_{F} h_n(X, Y) \, dP \\ &= \lim_{n \rightarrow \infty} \int_F E[h_n(x, Y)] \big|_{x = X} \, dP = \ldots \end{align}
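For concreteness, one standard choice of such a sequence is the dyadic construction $h_n = \min\big(\lfloor 2^n h \rfloor / 2^n,\; n\big)$, which takes finitely many values and increases pointwise to $h$ (and is within $2^{-n}$ of $h$ once $n \geq \sup h$). The following sketch checks these two properties numerically for a hypothetical positive bounded function:

```python
import numpy as np

# Standard dyadic simple-function approximation of a positive bounded h:
#   h_n = min( floor(2^n h) / 2^n , n ).
def simple_approx(h_vals, n):
    return np.minimum(np.floor(2.0**n * h_vals) / 2.0**n, n)

# Hypothetical positive bounded h evaluated on a grid of sample points
# (stands in for h(X(w), Y(w)) along a sample).
t = np.linspace(-5, 5, 1001)
h_vals = np.abs(np.sin(t))  # 0 <= h <= 1

prev = simple_approx(h_vals, 1)
for n in range(2, 12):
    cur = simple_approx(h_vals, n)
    assert np.all(cur >= prev)  # h_n increases pointwise in n
    prev = cur

# Once n >= sup h, the approximation error is at most 2^{-n}.
assert np.max(h_vals - prev) <= 2.0**-11 + 1e-12
```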

To be able to apply the monotone convergence theorem one would need to show that $$ \tag{1} E[h_n (x, Y)] \big|_{x = X(\omega)} \uparrow E[h (x, Y)] \big|_{x = X(\omega)}, \quad \text{for every } \omega \in \Omega \text{ (pointwise)}, $$ probably using that $$ h_n (x, Y(\omega')) \big|_{x = X(\omega)} \uparrow h (x, Y(\omega')) \big|_{x = X(\omega)} \quad \text{for every } (\omega, \omega') \in \Omega \times \Omega \text{ (pointwise)}. $$
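The convergence in $(1)$ can also be observed numerically in a small hypothetical example (not part of the question): take $Y$ uniform on a finite set, $h(x, y) = |\sin(x + y)|$, fix $x$, and watch $E[h_n(x, Y)]$ increase to $E[h(x, Y)]$ as the dyadic approximations $h_n$ refine:

```python
import numpy as np

# Hypothetical setup: Y uniform on {0, 0.1, ..., 0.9},
# h(x, y) = |sin(x + y)|, a positive bounded function.
y_vals = np.arange(10) / 10.0
p_y = np.full(10, 0.1)
h = lambda x, y: np.abs(np.sin(x + y))

def h_n(vals, n):  # standard dyadic simple-function approximation
    return np.minimum(np.floor(2.0**n * vals) / 2.0**n, n)

x = 0.7  # an arbitrary fixed value of x
exact = np.sum(h(x, y_vals) * p_y)  # E[h(x, Y)]

# E[h_n(x, Y)] for increasing n: should increase to E[h(x, Y)],
# which is exactly claim (1) at this fixed x.
approx = [np.sum(h_n(h(x, y_vals), n) * p_y) for n in range(1, 20)]

assert all(a <= b + 1e-15 for a, b in zip(approx, approx[1:]))  # increasing
assert abs(approx[-1] - exact) <= 2.0**-19 + 1e-12              # converges
```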

Let $h(x, y)$ be measurable and bounded.

Write $h (X, Y) = h^{+} (X, Y) - h^{-} (X, Y)$. Then

\begin{align} \tag{2} \int_F h(X, Y) \, dP &= \int_F h^{+}(X, Y) \, dP - \int_F h^{-}(X, Y) \, dP \\ &= \int_F E[h^{+}(x, Y)] \big|_{x = X(\omega)} \, dP(\omega) - \int_{F} E[h^{-}(x, Y)] \big|_{x = X(\omega)} \, dP(\omega) \\ &= \int_F E[h^{+}(x, Y) - h^{-}(x, Y)] \big|_{x = X(\omega)} \, dP(\omega) \\ &= \int_F E[h(x, Y)] \big|_{x = X(\omega)} \, dP(\omega), \end{align} where the second equality uses the result for positive functions and the third uses the linearity of the expectation.


Assuming $(2)$ is correct, how can one show $(1)$? I would also be interested in other ways of showing $(2)$, if there are any. A self-contained proof would be best; if a proof relies on known results, a reference would be nice. Thanks.


Accepted answer:

Your (1) is nothing but another application of the monotone convergence theorem. Fix $x$ for the moment. As you say, for each $\omega'$ we have $h_n(x, Y(\omega')) \uparrow h(x, Y(\omega'))$. Therefore by the monotone convergence theorem, we have $E[h_n(x, Y)] \uparrow E[h(x,Y)]$. Now $x$ was arbitrary, so this is true for every $x$; in particular it is true when $x = X(\omega)$, for any $\omega$.

Then (2) is just the linearity of the integral.

Another answer:

It looks to me like you are repeating some of the steps of the proof of Fubini's theorem (or the change-of-variables theorem). It is much easier if you base your result on those theorems.

Let $g(x)=E[h (x, Y)]$. It is sufficient to prove that for any $A \in \mathcal{F}_1$, $$ \int_A g(X)dP = \int_A h(X, Y) dP. $$

Let $Z = 1_A$; then $Z$ is $\mathcal{F}_1$-measurable. Let the joint distribution of $(X, Z)$ be $\mu$ and the distribution of $Y$ be $\nu$.

Then $$\int_A h(X, Y) dP = \int h(X, Y)Z dP = E[h(X, Y)Z] = \int\int h(x,y)zd\mu(x,z)d\nu(y).$$

In the previous steps, we first changed the integration space from $\Omega$ to the product space $\mathbb{R}^{n+m+1}$. Since $\mathcal{F}_1$ and $\mathcal{F}_2$ are independent, the induced measure on $\mathbb{R}^{n+m+1}$ is the product of the two measures $\mu$ and $\nu$. A further application of Fubini's theorem leads to $$\int_A h(X, Y) \, dP = \int\left(\int h(x,y) \, \nu(dy)\right) z \, d\mu(x,z) = \int E[h(x, Y)] \, z \, d\mu(x,z) = \int g(x) z \, d\mu(x,z) = \int g(X) 1_A \, dP = \int_A g(X) \, dP. $$
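On a finite sample space the Fubini step is just an exchange of two finite sums, so the argument can be verified exactly. A hypothetical discrete instance (with $X$, $Y$ independent, $Z = 1_{\{X \geq 1\}}$, and exact rational arithmetic) might look like:

```python
from fractions import Fraction as Fr

# Finite-sample-space version of the Fubini argument: X and Y independent
# and discrete, Z = 1_A with A = {X >= 1} in sigma(X). Exact arithmetic,
# so the two iterated sums must agree exactly.
pX = {0: Fr(1, 2), 1: Fr(1, 4), 2: Fr(1, 4)}   # law mu of X (hypothetical)
pY = {0: Fr(1, 3), 1: Fr(2, 3)}                # law nu of Y (hypothetical)
h = lambda x, y: Fr(x * y + 1, x + y + 1)      # some bounded measurable h

ind = lambda x: 1 if x >= 1 else 0             # Z = 1_{X >= 1}

# Left side: E[h(X, Y) Z] as a double sum over the product measure.
lhs = sum(h(x, y) * ind(x) * px * py
          for x, px in pX.items() for y, py in pY.items())

# Right side: integrate out y first to get g(x) = E[h(x, Y)], then
# integrate g(X) Z against mu -- the Fubini step.
g = lambda x: sum(h(x, y) * py for y, py in pY.items())
rhs = sum(g(x) * ind(x) * px for x, px in pX.items())

assert lhs == rhs  # exact equality, by Fubini / independence
```

The general case replaces the finite sums by integrals against $\mu$ and $\nu$, which is where the measurability and boundedness of $h$ are needed.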

In the final two steps, we changed the integration space back to $\Omega$.