$E[f(X)|\mathcal{Q}]=\int_{0}^{+\infty}f'(x)P(X>x|\mathcal{Q})dx$ a.s.


Let $(\Omega,\mathcal{F},P)$ be a probability space and let $\mathcal{Q} \subset \mathcal{F}$ be a sub-$\sigma$-algebra. Let $f:\mathbb{R}^+ \rightarrow \mathbb{R}^+$ be a non-decreasing function of class $C^1$ with $f(0)=0,$ and consider a random variable $X$ taking values in $\mathbb{R}^+.$

Prove that $$E[f(X)|\mathcal{Q}]=\int_{0}^{+\infty}f'(x)P(X>x|\mathcal{Q})dx \ \ \ \ a.s.$$

The first thing we need to prove is that $\int_{0}^{+\infty}f'(x)P(X>x|\mathcal{Q})dx$ is $\mathcal{Q}$-measurable, and then that for all $E \in \mathcal{Q},$ $$\int_E \left(\int_0^{+\infty}f'(x)P(X>x|\mathcal{Q})dx\right)dP=\int_Ef(X)dP.$$

If $(x,\omega) \mapsto f'(x)P(X>x|\mathcal{Q})(\omega)$ is $(\mathcal{B}(\mathbb{R}^+) \otimes \mathcal{Q})$-measurable, then we are done by Fubini–Tonelli (the integrand is non-negative).

So is it true that $(x,\omega) \mapsto f'(x)P(X>x|\mathcal{Q})(\omega)$ is $(\mathcal{B}(\mathbb{R}^+) \otimes \mathcal{Q})$-measurable? Should we start from $X=1_E$ with $E \in \mathcal{F}$ and approximate? If not, is a $\pi$-$\lambda$ system argument possible?
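As a quick sanity check (not part of the proof), the unconditional version of the identity, $E[f(X)]=\int_0^\infty f'(x)P(X>x)\,dx$, can be verified numerically. The sketch below uses $f(x)=x^2$ and $X\sim\mathrm{Exp}(1)$, both chosen purely for illustration, and replaces $P(X>x)$ by the empirical tail of the sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = np.sort(rng.exponential(1.0, size=n))   # X ~ Exp(1), sorted for fast tail lookup

f = lambda x: x ** 2            # C^1, non-decreasing, f(0) = 0
fprime = lambda x: 2.0 * x

lhs = f(X).mean()               # Monte Carlo estimate of E[f(X)]

# Empirical tail P(X > x) on a grid, then a Riemann sum for the x-integral.
xs = np.linspace(0.0, 20.0, 2001)
tail = 1.0 - np.searchsorted(X, xs, side="right") / n
dx = xs[1] - xs[0]
rhs = float(np.sum(fprime(xs) * tail) * dx)

print(lhs, rhs)                 # both approximate E[X^2] = 2
```

The two estimates agree up to Monte Carlo and discretization error, since the layer-cake identity holds exactly for the empirical measure of the sample.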

There are 5 answers below.

Best answer:

Let $Y$ be a bounded $\mathscr{Q}$-measurable random variable. Since $(t,\omega)\mapsto 1_{t<X(\omega)}f'(t)$ is $(\mathscr{B}(\mathbb{R}^+)\otimes \mathscr{F})$-measurable and non-negative, Tonelli's theorem gives \begin{align*} \mathsf{E}[f(X)Y]&=\mathsf{E}\Big[\int_{0}^{\infty}1_{t<X(\omega)}f'(t)\,dt\cdot Y(\omega)\Big]\\ &=\int_{0}^{\infty}\mathsf{E}[1_{t<X}Y]f'(t)\,dt, \qquad \forall Y\in b\mathscr{Q}. \end{align*} Hence \begin{align*} \mathsf{E}[f(X)|\mathscr{Q}] &=\mathsf{E}\Big[\int_{0}^{\infty}1_{t<X}f'(t)\,dt\Bigm|\mathscr{Q}\Big]\\ &=\int_{0}^{\infty}\mathsf{P}(X>t|\mathscr{Q})f'(t)\,dt . \end{align*}

Remark: the existence of a $(t,\omega)$-measurable version of $\mathsf{E}[1_{X>t}|\mathscr{Q}]$ can be deduced from the existence of a regular conditional probability $\mathsf{P}_X(B|\mathscr{Q})$ together with a $\pi$-$\lambda$ system argument. See Galen R. Shorack, Probability for Statisticians, 2nd ed., Springer International Publishing, 2017, Sec. 7.5, p. 143.
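For intuition, the conditional identity can also be checked numerically when $\mathscr{Q}$ is generated by a finite partition, since then $\mathsf{P}(X>t|\mathscr{Q})$ is constant on each atom and equals the conditional tail probability there. The sketch below (distributions chosen purely for illustration) conditions on the two-atom $\sigma$-algebra $\sigma(Z)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
Z = rng.integers(0, 2, size=n)        # Q = sigma(Z) has two atoms, {Z=0} and {Z=1}
scale = np.where(Z == 0, 1.0, 3.0)
X = rng.exponential(scale)            # X | Z=0 ~ Exp(1),  X | Z=1 ~ Exp(1/3)

f = lambda x: x ** 2                  # C^1, non-decreasing, f(0) = 0
fprime = lambda x: 2.0 * x

xs = np.linspace(0.0, 60.0, 6001)     # grid for the x-integral
dx = xs[1] - xs[0]

results = {}
for z in (0, 1):
    Xz = np.sort(X[Z == z])           # the sample restricted to the atom {Z=z}
    lhs = f(Xz).mean()                # estimates E[f(X) | Z=z]
    tail = 1.0 - np.searchsorted(Xz, xs, side="right") / Xz.size
    rhs = float(np.sum(fprime(xs) * tail) * dx)   # estimates ∫ f'(x) P(X>x|Z=z) dx
    results[z] = (lhs, rhs)

print(results)   # on each atom, lhs ≈ rhs (about 2 for z=0, about 18 for z=1)
```

On each atom the two estimates agree up to sampling and discretization error, matching the a.s. identity atom by atom.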

Answer:

Hint for measurability: let $F(x,\omega)$ be measurable in $\omega$ for each $x$ and right-continuous in $x$ for each $\omega$. Then $F$ is jointly measurable. Proof: $F(x,\omega)=\lim_n F\big(\lceil 2^{n}x\rceil/2^{n},\omega\big)$ by right-continuity, since $\lceil 2^{n}x\rceil/2^{n}\downarrow x$, and $F(\lceil 2^{n}x\rceil/2^{n},\omega)$ is jointly measurable for each $n$ because it is constant in $x$ on every dyadic interval $((k-1)/2^{n},k/2^{n}]$. [Of course, $f'(x)$ can be separated out, since products of measurable functions are measurable.]
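A small numerical illustration of the dyadic argument, with a hypothetical right-continuous step function standing in for $x\mapsto F(x,\omega)$: approximating $x$ from the right along the dyadic grid recovers $F(x)$, even at points arbitrarily close to a jump.

```python
import math

# A right-continuous step function standing in for F(., omega).
def F(x):
    return math.floor(x)       # right-continuous, with jumps at the integers

def F_n(x, n):
    # Evaluate F at the dyadic point just to the RIGHT of x;
    # ceil(2^n x)/2^n decreases to x, so right-continuity gives F_n -> F.
    return F(math.ceil(2 ** n * x) / 2 ** n)

x = 0.999                      # just below the jump at 1
print([F_n(x, n) for n in (1, 5, 10, 20)])   # -> [1, 1, 0, 0]
```

For coarse $n$ the dyadic point lands past the jump at $1$ and gives the wrong value; once the grid is fine enough, $F_n(x)$ equals $F(x)=0$, exactly as right-continuity demands. (With the left approximation $\lfloor 2^n x\rfloor/2^n \uparrow x$, one would instead need left-continuity.)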

Answer:

The following is NOT a proof or a solution.

It seems that right-continuity cannot be proved. Let me elaborate a bit on the measurability of the map $(x,\omega)\mapsto P\left([X>x]\mid\mathcal{G}\right)(\omega)$. Recall that the conditional expectation is, in general, not unique, only unique up to a.e. equality. For each $x$, we have many choices for the conditional expectation $\omega\mapsto P\left([X>x]\mid\mathcal{G}\right)(\omega)$. Let $\mathcal{C}_{x}$ be the (non-empty) set of all versions of the conditional expectation $P\left([X>x]\mid\mathcal{G}\right)$. By the Axiom of Choice, there exists a map $\theta:[0,\infty)\rightarrow\bigcup_{x}\mathcal{C}_{x}$ such that $\theta(x)\in\mathcal{C}_{x}$. Define $F:[0,\infty)\times\Omega\rightarrow[0,1]$ by $F(x,\omega)=\theta(x)(\omega)$; informally, $F$ is the map $(x,\omega)\mapsto P\left([X>x]\mid\mathcal{G}\right)(\omega)$. We attempt to prove the following:

  1. For each $\omega\in\Omega$, $F(\cdot,\omega)$ is right-continuous.

Proof of (1): Let $x\in[0,\infty)$ be fixed and let $(x_{n})$ be an arbitrary sequence of real numbers with $x_{1}>x_{2}>\ldots>x$ and $x_{n}\rightarrow x$. To show that $F(\cdot,\omega)$ is right-continuous at $x$, it suffices to show that $F(x_{n},\omega)\rightarrow F(x,\omega)$ (a theorem in elementary analysis due to Heine). Observe that $[X>x_{1}]\subseteq[X>x_{2}]\subseteq\ldots$, so the sequence of random variables $(1_{[X>x_{n}]})_{n}$ is monotone increasing, and $1_{[X>x_{n}]}\rightarrow1_{[X>x]}$ pointwise. (For, let $\omega\in\Omega$ and consider two cases. Case 1: $X(\omega)>x$, so $1_{[X>x]}(\omega)=1$. Since $X(\omega)>x$ and $x_{n}\rightarrow x$, there exists $N$ such that $X(\omega)>x_{n}$ whenever $n\geq N$; for such $n$ we have $1_{[X>x_{n}]}(\omega)=1$, so $\lim_{n}1_{[X>x_{n}]}(\omega)=1$. Case 2: $X(\omega)\leq x$. Clearly $\lim_{n}1_{[X>x_{n}]}(\omega)=1_{[X>x]}(\omega)=0$.) By the conditional Monotone Convergence Theorem, $E\left(1_{[X>x_{n}]}\mid\mathcal{G}\right)\rightarrow E\left(1_{[X>x]}\mid\mathcal{G}\right)$ (a.e.). From this point, we CANNOT argue as follows: "let $\omega\in\Omega$; then $E\left(1_{[X>x_{n}]}\mid\mathcal{G}\right)(\omega)\rightarrow E\left(1_{[X>x]}\mid\mathcal{G}\right)(\omega)$", because the convergence holds only outside a null set, which may depend on the sequence $(x_{n})$ and on the chosen versions.

It seems that the right-continuity of $F(\cdot,\omega)$ cannot be proved, because it depends on the choice function $\theta$: for one choice $\theta$, $F(\cdot,\omega)$ may be right-continuous for all $\omega$, while for another choice $\theta'$ it may fail to be.

A reasonable question is: is it possible to choose $\theta$ such that $F(\cdot,\omega)$ defined above is right-continuous?

Answer:

We claim: for each random variable $X:\Omega\rightarrow[0,\infty)$, there exists a version of the conditional expectation $E\left(1_{[X\leq x]}\mid\mathcal{G}\right)$ such that the map $[0,\infty)\times\Omega\rightarrow\mathbb{R}$, $(x,\omega)\mapsto E\left(1_{[X\leq x]}\mid\mathcal{G}\right)(\omega)$, is jointly measurable. (For the meaning of a version of the conditional expectation $E\left(1_{[X\leq x]}\mid\mathcal{G}\right)$, where the Axiom of Choice is invoked, please read my other post.)

Proof: Step 1: $X$ is a simple function. Suppose that $X=\sum_{i=1}^{n}\alpha_{i}1_{A_{i}}$, where $0\leq\alpha_{1}<\alpha_{2}<\ldots<\alpha_{n}$ and $A_{1},\ldots,A_{n}\in\mathcal{F}$ are pairwise disjoint with $\cup_{i=1}^{n}A_{i}=\Omega$. Fix $(x,\omega)\in[0,\infty)\times\Omega$. Observe that $$ 1_{[X\leq x]}(\omega)=\begin{cases} 1_\emptyset(\omega), & \mbox{if }x\in[0,\alpha_{1})\\ 1_{A_{1}}(\omega), & \mbox{if }x\in[\alpha_{1},\alpha_{2})\\ 1_{A_{1}\cup A_{2}}(\omega), & \mbox{if }x\in[\alpha_{2},\alpha_{3})\\ \vdots & \vdots\\ 1_\Omega(\omega), & \mbox{if }x\in[\alpha_{n},\infty) \end{cases}. $$ Therefore $1_{[X\leq x]}=\sum_{i=1}^{n+1}1_{[\alpha_{i-1},\alpha_{i})}(x)1_{B_{i-1}},$ where $\alpha_{0}:=0$, $\alpha_{n+1}:=\infty$, $B_{0}:=\emptyset$, and $B_{i}:=A_{1}\cup\cdots\cup A_{i}$ for $1\leq i\leq n$. It follows that for each $x\in[0,\infty)$, $$ E\left(1_{[X\leq x]}\mid\mathcal{G}\right)=\sum_{i=1}^{n+1}1_{[\alpha_{i-1},\alpha_{i})}(x)E\left(1_{B_{i-1}}\mid\mathcal{G}\right)\quad(a.e.) $$ Clearly, regardless of the choice of the conditional expectations $E\left(1_{B_{i-1}}\mid\mathcal{G}\right)$, the map $(x,\omega)\mapsto\sum_{i=1}^{n+1}1_{[\alpha_{i-1},\alpha_{i})}(x)E\left(1_{B_{i-1}}\mid\mathcal{G}\right)(\omega)$ is $\mathcal{B}([0,\infty))\otimes\mathcal{G}$-measurable. Since $0\leq1_{B_{i-1}}\leq1$, we have $0\leq E\left(1_{B_{i-1}}\mid\mathcal{G}\right)\leq1$ (a.e.), and we may choose versions with $0\leq E\left(1_{B_{i-1}}\mid\mathcal{G}\right)\leq1$ everywhere. In short, a version of $E\left(1_{[X\leq x]}\mid\mathcal{G}\right)$ can be chosen such that $0\leq E\left(1_{[X\leq x]}\mid\mathcal{G}\right)\leq1$ everywhere.
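The step structure in Step 1 can be seen numerically when $\mathcal{G}$ is generated by a finite partition, so that conditional expectations are atom-wise averages; the simple $X$ and the two-atom $\mathcal{G}=\sigma(Z)$ below are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
Z = rng.integers(0, 2, size=n)                    # G = sigma(Z), a two-atom sigma-algebra
X = np.where(Z == 0,
             rng.integers(0, 2, size=n),          # X in {0, 1} on the atom {Z=0}
             rng.integers(1, 3, size=n))          # X in {1, 2} on the atom {Z=1}
alphas = [0.0, 1.0, 2.0]                          # the values alpha_i of the simple X

# On the atom {Z=z}, a version of E[1_{X<=x} | G] is the constant P(X<=x | Z=z);
# as a function of x it is a step function, constant on each [alpha_i, alpha_{i+1}).
def cond_cdf(x, z):
    atom = Z == z
    return (X[atom] <= x).mean()

table = {z: [round(cond_cdf(a, z), 2) for a in alphas] for z in (0, 1)}
print(table)   # roughly {0: [0.5, 1.0, 1.0], 1: [0.0, 0.5, 1.0]}
```

Evaluating at the jump points $\alpha_i$ determines the whole function of $x$, which is exactly the finite-sum representation $\sum_i 1_{[\alpha_{i-1},\alpha_i)}(x)\,E(1_{B_{i-1}}\mid\mathcal{G})(\omega)$ and is visibly jointly measurable.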

Step 2: $X$ is a non-negative random variable. Choose a sequence of simple random variables $(X_{n})_{n}$ such that $0\leq X_{1}\leq X_{2}\leq\ldots\leq X$ and $X_{n}\rightarrow X$ pointwise. Fix $x\in[0,\infty)$. Observe that $1_{[X_{n}\leq x]}\rightarrow1_{[X\leq x]}$ pointwise. (For, let $\omega\in\Omega$. If $X(\omega)\leq x$, then $X_{n}(\omega)\leq x$ for all $n$, so $1_{[X\leq x]}(\omega)=1_{[X_{n}\leq x]}(\omega)=1$ for all $n$. If $X(\omega)>x$, then there exists $N$ such that $X_{n}(\omega)>x$ for all $n\geq N$, and hence $1_{[X_{n}\leq x]}(\omega)\rightarrow0=1_{[X\leq x]}(\omega)$.) By the Dominated Convergence Theorem (conditional-expectation version), $E\left(1_{[X_{n}\leq x]}\mid\mathcal{G}\right)\rightarrow E\left(1_{[X\leq x]}\mid\mathcal{G}\right)$ (a.e.). For clarity, denote by $F_{n}(x,\omega)$ a jointly measurable choice of $(x,\omega)\mapsto E\left(1_{[X_{n}\leq x]}\mid\mathcal{G}\right)(\omega)$ with $0\leq F_{n}\leq1$ everywhere, as constructed in Step 1. We have proven that for each $x\in[0,\infty)$, $F_{n}(x,\cdot)$ converges a.e. to a version of the conditional expectation $E\left(1_{[X\leq x]}\mid\mathcal{G}\right)$. Now define $G:[0,\infty)\times\Omega\rightarrow[0,1]$ by $G(x,\omega)=\limsup_{n}F_{n}(x,\omega).$ As a pointwise $\limsup$ of $\mathcal{B}([0,\infty))\otimes\mathcal{G}$-measurable functions, $G$ is $\mathcal{B}([0,\infty))\otimes\mathcal{G}$-measurable, and $0\leq G\leq1$. For each $x\in[0,\infty)$, $G(x,\cdot)$ is a version of the conditional expectation $E\left(1_{[X\leq x]}\mid\mathcal{G}\right)$.

Step 3: Finally, observe that $E\left(1_{[X>x]}\mid\mathcal{G}\right)=1-E\left(1_{[X\leq x]}\mid\mathcal{G}\right)$ (a.e.), so $1-G$ is a jointly measurable version of $E\left(1_{[X>x]}\mid\mathcal{G}\right)$.

Answer:

To clarify the situation, let us rephrase the question as follows:

Let $F:[0,\infty)\times\Omega\rightarrow[0,1]$ be a $\mathcal{B}([0,\infty))\otimes\mathcal{G}$-measurable function such that for each $x\in[0,\infty)$, $F(x,\cdot)$ is a version of the conditional expectation $E\left(1_{[X>x]}\mid\mathcal{G}\right)$. Let $f:[0,\infty)\rightarrow[0,\infty)$ be a $C^{1}$ non-decreasing function with $f(0)=0$. Prove that $E\left[f(X)\mid\mathcal{G}\right]=\int_{0}^{\infty}f'(x)F(x,\cdot)dx$ (a.e.).

Since $(x,\omega)\mapsto f'(x)F(x,\omega)$ is non-negative and $\mathcal{B}([0,\infty))\otimes\mathcal{G}$-measurable, and the measures under consideration are $\sigma$-finite, Tonelli's theorem is applicable. By Tonelli's theorem, the map $\omega\mapsto\int_{0}^{\infty}f'(x)F(x,\omega)dx$ is $[0,\infty]$-valued and $\mathcal{G}$-measurable; we denote it by $\int_{0}^{\infty}f'(x)F(x,\cdot)dx$. To prove that $E\left[f(X)\mid\mathcal{G}\right]=\int_{0}^{\infty}f'(x)F(x,\cdot)dx$, we only need to show that for each $A\in\mathcal{G}$, $$ \int_{A}f(X)dP=\int_{A}\left(\int_{0}^{\infty}f'(x)F(x,\cdot)dx\right)dP. $$ This is easy. For, \begin{eqnarray*} \int_{A}\left(\int_{0}^{\infty}f'(x)F(x,\cdot)dx\right)dP & = & \int_{0}^{\infty}f'(x)\left[\int_{A}F(x,\omega)dP(\omega)\right]dx\\ & = & \int_{0}^{\infty}f'(x)\left[\int_{A}1_{[X>x]}(\omega)dP(\omega)\right]dx\\ & = & \int\left[\int_{0}^{\infty}f'(x)1_{A}(\omega)1_{[X>x]}(\omega)dx\right]dP(\omega)\\ & = & \int\left[\int_{0}^{X(\omega)}1_{A}(\omega)f'(x)dx\right]dP(\omega)\\ & = & \int1_{A}(\omega)\left(f(X(\omega))-f(0)\right)dP(\omega)\\ & = & \int_{A}f(X)dP. \end{eqnarray*} (The second equality uses $A\in\mathcal{G}$ and the defining property of $F(x,\cdot)$ as a version of $E\left(1_{[X>x]}\mid\mathcal{G}\right)$; the first and third are Tonelli again.) In the above, we have also used the fact that $(x,\omega)\mapsto1_{[X>x]}(\omega)$ is $\mathcal{B}([0,\infty))\otimes\mathcal{F}$-measurable, which can be proved as follows: let $B=\{(x,\omega)\in[0,\infty)\times\Omega\mid X(\omega)>x\}$; then \begin{eqnarray*} B & = & \cup_{r\in\mathbb{Q},r>0}\{(x,\omega)\mid X(\omega)>r>x\}\\ & = & \cup_{r\in\mathbb{Q},r>0}\{(x,\omega)\mid x<r\}\cap\{(x,\omega)\mid X(\omega)>r\}\\ & = & \cup_{r\in\mathbb{Q},r>0}\left([0,r)\times\Omega\right)\cap\left([0,\infty)\times X^{-1}((r,\infty))\right), \end{eqnarray*} which is clearly jointly measurable. Now, $(x,\omega)\mapsto1_{[X>x]}(\omega)$ is simply the indicator $1_{B}(x,\omega)$.