A generalization of conditional expectation to non-integrable random variables


I'm reading about conditional expectations from Achim Klenke's Probability Theory: A Comprehensive Course, which defines conditional expectations for integrable random variables $X$ with respect to a $\sigma$-algebra $\mathcal F$ as the unique (a.s.) $\mathcal F$-measurable random variable $Y$ for which $\mathbb E[X\mathbb 1_A] = \mathbb E[Y\mathbb 1_A]$ for all $A \in \mathcal F$. The following remark is made to generalize this definition to some non-integrable random variables:

Let $X : \Omega \to \mathbb R$ be a random variable such that $X^- \in \mathcal L^1(\mathbb P)$. We can define the conditional expectation as the monotone limit $$ \mathbb E[X\,|\,\mathcal F] = \lim_{n \to \infty} \mathbb E[X_n\,|\,\mathcal F] $$ where $-X^- \leq X_1$ and $X_n \uparrow X$. Due to the monotonicity of the conditional expectation, it is easy to show that the limit does not depend on the choice of the sequence $(X_n)$ and that it fulfills the conditions of [the definition of conditional expectation].

(Presumably we want $(X_n) \subset \mathcal L^1(\mathbb P)$ to avoid circular definitions.)

My question: Why do we need $-X^- \leq X_1$?

It's not hard to verify these claims. By monotonicity of conditional expectation, $\mathbb E[X_n\,|\,\mathcal F] \uparrow \mathbb E[X\,|\,\mathcal F]$ (this limit may be infinite). So for any $A \in \mathcal F$, by the Beppo Levi monotone convergence theorem, since the $\mathbb E[X_n\,|\,\mathcal F]$ are integrable (in particular, the increasing sequence is bounded below by the integrable $\mathbb E[X_1\,|\,\mathcal F]$), $$ \mathbb E[\mathbb E[X\,|\,\mathcal F]\mathbb 1_A] = \mathbb E\left[\lim_{n \to \infty} \mathbb E[X_n\,|\,\mathcal F]\mathbb 1_A\right] = \lim_{n \to \infty} \mathbb E\left[\mathbb E[X_n\,|\,\mathcal F]\mathbb 1_A\right] = \lim_{n \to \infty} \mathbb E[X_n\mathbb 1_A] = \mathbb E[X\mathbb 1_A]. $$ If $(\tilde X_n)$ is another sequence with $\tilde X_n \uparrow X$ and $-X^- \leq \tilde X_1$, letting $Y = \lim \mathbb E[X_n \,|\,\mathcal F]$ and $\tilde Y = \lim\mathbb E[\tilde X_n\,|\,\mathcal F]$, then both $Y$ and $\tilde Y$ are $\mathcal F$-measurable (as limits of $\mathcal F$-measurable functions), and this calculation shows $\mathbb E[Y\mathbb 1_A] = \mathbb E[\tilde Y \mathbb 1_A]$ for every $A \in \mathcal F$. It follows that $Y = \tilde Y$ almost surely.
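As a numerical sanity check of the defining property $\mathbb E[X\mathbb 1_A] = \mathbb E[Y\mathbb 1_A]$, here is a small Monte Carlo sketch (a toy model of my own, not from Klenke) where $\mathcal F$ is generated by a fair coin, so the conditional expectation is just the atom-wise average:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: F is generated by a fair coin B, so an F-measurable Y must be
# constant on each of the two atoms {B = 0} and {B = 1}.
N = 200_000
B = rng.integers(0, 2, N)                # generator of F
X = rng.normal(loc=2.0 * B, scale=1.0)   # X depends on B, plus noise

# Conditional expectation: average of X over the atom containing each point.
Y = np.where(B == 1, X[B == 1].mean(), X[B == 0].mean())

# Defining property: E[X 1_A] = E[Y 1_A] for the atoms A generating F.
for A in (B == 0, B == 1):
    print(np.isclose((X * A).mean(), (Y * A).mean()))  # True on both atoms
```

On atoms the identity holds exactly (both sides equal the atom's sample sum divided by $N$), which is why the check passes up to floating-point rounding.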

I can see that we'd want $X^- \in \mathcal L^1(\mathbb P)$ because otherwise we can't have both $(X_n) \subset \mathcal L^1(\mathbb P)$ and $X_n \uparrow X$: indeed, $X_n \leq X$ forces $X_n^- \geq X^-$. But I don't see why we need $-X^- \leq X_1$. In fact this seems contradictory if we require $X_n \uparrow X$. Am I missing something?


There is 1 best solution below.


Since your random variables may take negative values, you need a generalized version of the monotone convergence theorem. Such a result exists which, in addition to $X_n \uparrow X$, assumes that $X_1^-$ is integrable. The assumption $-X^- \leq X_1$ ensures exactly this: it gives $X_1^- \leq X^-$, and hence $\mathbb E[X_1^-] \leq \mathbb E[X^-] < \infty$.

Note that you can construct a sequence $(X_n)$ with the desired properties by taking $X_n = X\mathbb 1_{\{X\leq n\}}$: each $X_n$ is integrable (since $-X^- \leq X_n \leq n$), the sequence increases to $X$, and $X_1 = X\mathbb 1_{\{X\leq 1\}} \geq -X^-$.
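To see this construction in action, here is a hypothetical numerical sketch: $X = P - U$ with $P$ heavy-tailed (so $\mathbb E[X^+] = \infty$) and $U$ bounded (so $X^-$ is integrable). The truncations $X_n = X\mathbb 1_{\{X\leq n\}}$ are integrable and their means increase with $n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: X = P - U with P of tail index 1/2 (E[X^+] = infinity)
# and U ~ Uniform(0, 1) bounded, so X^- <= U is integrable.
N = 100_000
P = (1.0 - rng.random(N)) ** -2.0   # heavy-tailed variable with infinite mean
U = rng.random(N)
X = P - U

# Truncations X_n = X 1_{X <= n}: integrable, increasing in n, X_n -> X,
# and X_1 >= -X^- since truncation only discards large positive values.
for n in [1, 10, 100, 1000]:
    X_n = np.where(X <= n, X, 0.0)
    print(n, round(X_n.mean(), 3))  # sample means increase with n
```

Since the $X_n$ increase pointwise, the sample means are monotone in $n$, mirroring $\mathbb E[X_n] \uparrow \mathbb E[X] = +\infty$.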