Confused when changing from Lebesgue Integral to Riemann Integral


I'm currently studying stochastic calculus via Shreve II. I have a question about switching back and forth between the Lebesgue and Riemann integrals.

Suppose we have a non-negative random variable $X$ defined on a probability space $(\Omega, \mathcal{F}, P)$ with exponential distribution: $$P(X<x) = 1-e^{-\lambda x}$$

Written as a Lebesgue integral, the expected value of $X$ is $$E[X] = \int_{\{\omega \mid X(\omega) \geq 0\}} X(\omega)\,dP(\omega)$$

Question: How exactly do we switch from $\omega$ in the Lebesgue integral to $x$ in the Riemann integral, so that we get $$E[X] = \int_{0}^{\infty}x\lambda e^{-\lambda x}\,dx$$

Does this have to do with the fact that we could take $\Omega = [0,\infty)$ equipped with the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ and simply define $X(\omega) = \omega$?

Any help is greatly appreciated!

BEST ANSWER

It is not really a matter of changing from the Lebesgue integral to the Riemann integral; it is a matter of changing measures (a sort of change of variables).

By definition, given a random variable $X : \Omega \to \mathbb{R}$, $E[X]=\int_{\Omega}X \, dP$.

$X$ defines a measure $\widetilde{m}$ on $\mathbb{R}$, called the push-forward of $P$ under $X$, by $\widetilde{m}(A)=P(X^{-1}(A))$. By construction, \begin{equation} \tag{1} \int_{\mathbb{R}} f \, d\widetilde{m}= \int_{\Omega}f \circ X \, dP. \end{equation} The equality follows from the usual argument: prove it for indicator functions, extend to simple functions by linearity, and pass to the limit by monotone convergence (recall that $\mathbf{1}_A \circ X=\mathbf{1}_{X^{-1}(A)}$).
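As a quick numerical sanity check of $(1)$ with $f=\mathbf{1}_{[a,b]}$ (an illustrative sketch only: the rate $\lambda = 2$ and the interval $[a,b]$ are arbitrary choices, and Monte Carlo sampling stands in for the exact integral over $\Omega$):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                         # illustrative rate, not fixed by the post
X = rng.exponential(scale=1/lam, size=1_000_000)  # draws of X under P

a, b = 0.5, 1.5
# Right side of (1) with f = 1_{[a,b]}: the integral of 1_{X in [a,b]} dP,
# estimated by the empirical frequency of the event
rhs = np.mean((X >= a) & (X <= b))

# Left side of (1): tilde_m([a,b]) = P(X^{-1}([a,b])), computed exactly
# from the distribution function F(x) = 1 - exp(-lam * x)
lhs = np.exp(-lam * a) - np.exp(-lam * b)

print(lhs, rhs)  # the two agree up to Monte Carlo error
```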

Let $h$ be the density of $X$. By definition of density, $\widetilde{m}(A)=P(X^{-1}(A))=\int_A h \, dm$ for any $A \in \mathcal{B}(\mathbb{R})$, where $m$ is Lebesgue measure. By "change of variables" (Theorem $1.29$ in Rudin's Real and Complex Analysis, for example, but again just another instance of the indicator–simple–convergence argument), we have: \begin{equation} \tag{2} \int_{\mathbb{R}} f \, d\widetilde{m}=\int_{\mathbb{R}}f \cdot h \, dm. \end{equation} Combining $(1)$ and $(2)$, $$\int_{\mathbb{R}} f \cdot h \, dm= \int_{\Omega}f \circ X \, dP. $$ Taking $f=\mathrm{Id}$ yields $$\int_{\mathbb{R}} x h(x)\,dx= \int_{\Omega} X \, dP=E[X]. $$ Taking $f=\mathrm{Id}\cdot\mathbf{1}_I$, where $I$ is some interval (for example, $(0,+\infty)$ as in your case), gives $$\int_{I} x h(x)\,dx= \int_{X^{-1}(I)} X \, dP, $$ again because $\mathbf{1}_A \circ X=\mathbf{1}_{X^{-1}(A)}$. Since $P(X<0)=0$ in your case, this last integral is actually equal to the integral over the whole space, hence to $E[X]$, which gives your equality.
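The final identity can also be checked numerically (a sketch only: the rate $\lambda = 2$ is an arbitrary choice, the sample mean stands in for $\int_\Omega X\,dP$, and a truncated trapezoidal sum stands in for $\int_0^\infty x h(x)\,dx$, whose closed form is $1/\lambda$):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                         # illustrative rate, not from the post
X = rng.exponential(scale=1/lam, size=1_000_000)  # draws of X under P

# Left side: E[X] = int_Omega X dP, approximated by the sample mean
lhs = X.mean()

# Right side: int_0^inf x * h(x) dx with h(x) = lam * exp(-lam * x),
# truncated at 20/lam (the tail is negligible) and done by a trapezoidal sum
xs = np.linspace(0.0, 20/lam, 200_001)
integrand = xs * lam * np.exp(-lam * xs)
dx = xs[1] - xs[0]
rhs = dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

print(lhs, rhs, 1/lam)  # all three agree: the closed form is 1/lam
```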

ANSWER

By definition, the random variable $X: \Omega \to \mathbb{R}$ is measurable, with distribution function

$$F(x) = P[X < x] = 1- e^{-\lambda x}$$

We can construct a non-decreasing sequence of approximating step functions $(\phi_n)$ converging pointwise to $X$ and of the form

$$\phi_n = \sum_{j=1}^{m(n)}x_{j-1}^{(n)}\mathbf{1_{A_j^{(n)}}}$$

where $A_j^{(n)} = \{\omega: x_{j-1}^{(n)} \leqslant X(\omega) < x_j^{(n)} \}$ and $[0,\infty) = \bigcup_{j=1}^\infty[x_{j-1}^{(n)},x_j^{(n)})$, with $m(n) \to \infty$ and $x_j^{(n)}- x_{j-1}^{(n)} \to 0$ as $n \to \infty$.

By the monotone convergence theorem,

$$E[X] = \lim_{n \to \infty}E[\phi_n] = \lim_{n \to \infty}\sum_{j=1}^{m(n)} x_{j-1}^{(n)}P(A_j^{(n)}) = \lim_{n \to \infty}\sum_{j=1}^{m(n)} x_{j-1}^{(n)}\left[F(x_j^{(n)}) - F(x_{j-1}^{(n)})\right] $$

Recognizing the limit on the right-hand side as that of a Riemann–Stieltjes sum, we get

$$E[X] = \int_0^\infty x \,dF(x) = \int_0^\infty xF'(x) \, dx = \int_0^\infty x \lambda e^{-\lambda x} \, dx$$
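One can watch these Riemann–Stieltjes sums converge numerically (an illustrative sketch; the rate $\lambda = 2$, the uniform grid, and the truncation at $20/\lambda$ are all arbitrary choices made for the demonstration):

```python
import numpy as np

lam = 2.0                               # illustrative rate, chosen arbitrarily
F = lambda x: 1.0 - np.exp(-lam * x)    # distribution function F(x) = P[X < x]

def stieltjes_sum(n, upper=20/lam):
    """Left-endpoint sum  sum_j x_{j-1} [F(x_j) - F(x_{j-1})]  on a uniform grid."""
    x = np.linspace(0.0, upper, n + 1)
    return float(np.sum(x[:-1] * (F(x[1:]) - F(x[:-1]))))

for n in (10, 100, 10_000):
    print(n, stieltjes_sum(n))          # increases toward E[X] = 1/lam = 0.5
```

Because the left endpoint $x_{j-1}^{(n)}$ is the infimum of $x$ on each cell, these sums approach $E[X]$ from below as the grid refines.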