Constructing a probability space


I am trying to understand the proof that $$\mathbb Eg(X) = \int_\mathbb R g(x)d\mu_X(x),$$ where $X$ is a random variable on a probability space $(\Omega, \mathcal F, \mathbb P)$. It starts with the case where $g(x)=I_B(x)$ is the indicator function of a Borel subset $B$ of $\mathbb R$. In this case we have to prove $$\mathbb EI_B(X) = \int_\mathbb R I_B(x)d\mu_X(x).$$

We have that $\mathbb EI_B(X) = 1\cdot \mathbb P\{X\in B\} + 0\cdot \mathbb P\{X\not\in B\} = \mathbb P\{ X\in B \}$, which equals $\mu_X(B)$ by definition.
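As a quick numeric sanity check of this indicator-case identity (not part of the original proof; the finite sample space, probabilities, random variable, and set $B$ below are all invented for illustration):

```python
# Check E[1_B(X)] = mu_X(B) on a made-up finite probability space.
omega = ["a", "b", "c", "d"]                     # finite sample space
prob = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}  # P on Omega
X = {"a": 1.0, "b": 2.0, "c": 2.0, "d": 5.0}     # random variable X

B = {2.0, 5.0}  # a (finite) "Borel" set

# Left side: E[1_B(X)] = sum of 1_B(X(w)) * P({w}) over Omega
lhs = sum(prob[w] for w in omega if X[w] in B)

# Right side: mu_X(B) = P{w : X(w) in B}, the distribution of X
mu_X = {}
for w in omega:
    mu_X[X[w]] = mu_X.get(X[w], 0.0) + prob[w]
rhs = sum(p for x, p in mu_X.items() if x in B)

print(lhs, rhs)  # both 0.9
```

On a finite space both sides reduce to the same sum of point masses, which is exactly what the indicator step of the proof asserts.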

The next step is to prove that the integral on the right-hand side is also equal to $\mu_X(B)$. It proceeds as follows: with $\Omega=\mathbb R$, $X=I_B$ and $\mathbb P=\mu_X$, the integral is $$\int_\mathbb R I_B(x)d\mu_X(x) = 1\cdot \mu_X\{x; I_B(x)=1\} + 0\cdot \mu_X\{x; I_B(x)=0\} = \mu_X(B).$$

I don't get the point of $\Omega=\mathbb R$, $X=I_B$ and $\mathbb P=\mu_X$. I guess we can set $\Omega$ to any space we want and in this case we choose $\mathbb R$. Why is $\mathbb P=\mu_X$? By definition $\mu_X(B) = \mathbb P\{\omega \in \Omega; X(\omega)\in B\}$ and $\mathbb P(B) = \mathbb P\{\omega\in\Omega; \omega \in B\}$, but I am still confused. Is it the case that $X(\omega) = \omega$?

There are 2 answers below.

Accepted answer:

We start with the standard probability space $(\Omega, \mathscr{F}, \mathbb{P})$. Given a random variable on $\Omega$ (i.e. a $(\Omega,\mathscr{F}) \to (\mathbb{R},\mathbb{B}(\mathbb{R}))$-measurable function from $\Omega$ to $\mathbb{R}$), we define the expectation by: $$ \mathbb{E}[X] = \int_\Omega X(\omega) \,\mathrm{d}\mathbb{P}(\omega). $$

Now, given a random variable $X$, we may turn $\mathbb{R}$ into a probability space $(\mathbb{R}, \mathbb{B}(\mathbb{R}), \mathbb{P}_X)$, where $\mathbb{P}_X(B) = \mathbb{P}(X \in B)$. This is indeed a probability space, because $$ \mathbb{P}_X(\mathbb{R}) = \mathbb{P}(X\in \mathbb{R}) = \mathbb{P}(\Omega) = 1 $$ by the definition of the probability on the original space. Moreover, for any Borel set $B$, $\mathbb{P}_X(B)$ is well-defined thanks to the measurability of $X$.

Finally, let us consider "random variables" on this new space, i.e. the set of $(\mathbb{R},\mathbb{B}(\mathbb{R})) \to (\mathbb{R}, \mathbb{B}(\mathbb{R}))$-measurable functions $g$. The expectation operator applied to any such $g$ is given by: $$ \mathbb{E}[g] = \int_{\mathbb{R}} g(x) \,\mathrm{d}\mathbb{P}_X(x). $$

So, what have we done? We took our original probability space and used the random variable on it to define a new probability space over $\mathbb{R}$, whose $\sigma$-algebra consists of the Borel sets and whose probability measure is $\mathbb{P}_X$. Hope this makes things a tad clearer. As an addendum, it seems about right to say that we have taken our measure on $\Omega$ and "pushed it forward" to a measure on $\mathbb{R}$, hence the usual name for this concept: the pushforward measure.
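This construction can be sketched numerically. The three-point $\Omega$, the map $X$, and the test function $g$ below are all invented for the demonstration; the point is that $\mathbb{P}_X$ is built from $\mathbb{P}$ exactly as in the answer, and both expectations agree:

```python
# Push P forward along X to get P_X on R, then compare E[g(X)] computed
# on Omega with the integral of g against P_X computed on R.
from collections import defaultdict

prob = {0: 0.25, 1: 0.25, 2: 0.5}   # P on a three-point Omega
X = {0: -1.0, 1: 3.0, 2: 3.0}       # X : Omega -> R

# Pushforward measure P_X(B) = P(X in B), stored pointwise on the range of X
P_X = defaultdict(float)
for w, p in prob.items():
    P_X[X[w]] += p

assert abs(sum(P_X.values()) - 1.0) < 1e-12  # P_X(R) = P(Omega) = 1

def g(x):
    return x * x                    # an arbitrary Borel-measurable g

lhs = sum(g(X[w]) * p for w, p in prob.items())  # E[g(X)] over Omega
rhs = sum(g(x) * p for x, p in P_X.items())      # integral of g dP_X over R

print(lhs, rhs)  # both 7.0
```

Note that $X$ is not injective here (two points of $\Omega$ map to $3.0$), which is precisely why the pushforward accumulates mass pointwise.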

Another answer:

I wanted to add a possible interpretation/generalization for the statement you're proving, in hopes that it clarifies things.

A random variable in this context is a measurable map $X: (\Omega, \mathbb{P}) \rightarrow \mathbb{R}$.

As a general recipe, if you start with a measure space $(\Omega, \mu)$ and consider any (measurable) map $X$ from $\Omega$ to another measurable space $\Omega'$, we can use $X: \Omega \rightarrow \Omega'$ to push the measure $\mu$ on $\Omega$ over to a new measure $\mu_X$ on $\Omega'$. The pushforward $\mu_X$ is defined in the same way as your $\mu_X$, i.e. $\mu_X(B) = \mu(\{\omega \in \Omega : X(\omega) \in B\})$. Note that $\mu_X$ is a probability measure if $\mu$ was. The statement you're trying to prove is the change-of-variables formula for the pushforward measure resulting from the random variable $X$.
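This general recipe applies even when $\mu$ is not a probability measure. A minimal sketch, with an invented finite measure space and test function:

```python
# Pushforward of a general finite measure mu along a map X, together with a
# check of the change-of-variables formula  int g(X) dmu = int g dmu_X.
mu = {"p": 2.0, "q": 3.0, "r": 5.0}  # a measure on Omega (total mass 10, not 1)
X = {"p": 0.0, "q": 0.0, "r": 1.0}   # measurable map into Omega' = R

mu_X = {}
for w, m in mu.items():
    mu_X[X[w]] = mu_X.get(X[w], 0.0) + m  # mu_X({x}) = mu(X^{-1}({x}))

def g(x):
    return 2.0 * x + 1.0             # an arbitrary test function

lhs = sum(g(X[w]) * m for w, m in mu.items())  # int_Omega g(X) dmu
rhs = sum(g(x) * m for x, m in mu_X.items())   # int_Omega' g dmu_X

print(lhs, rhs)  # both 20.0
```

Since the total mass of `mu` is 10 rather than 1, this also illustrates the remark that the pushforward is a probability measure exactly when the original measure is.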

Here's one interpretation for this change-of-variables formula: let's say we're working with an abstract random variable $X: (\Omega, \mathbb{P}) \rightarrow \mathbb{R}$ and we want to understand its distribution (average value, variance, etc.). If we take the abstract probability measure $\mathbb{P}$ and push it forward along $X$, we get a Borel probability measure $\mu_X$ on $\mathbb{R}$. The formula guarantees that $\int_\Omega g(X) d\mathbb{P} = \int_\mathbb{R} g(x) d\mu_X$. In particular, for computations on the abstract space $(\Omega, \mathbb{P})$ like $E(X)$ or $E(X^2)$, we can instead compute $\int x d\mu_X$, $\int x^2 d \mu_X$, etc., so that $\mu_X$ is the "right" measure to use on $\mathbb{R}$ to model $X$.

In other words, if you're interested in studying the random variable $X$ on $(\Omega, \mathbb{P})$, it's enough to understand the identity function random variable $x: \mathbb{R} \rightarrow \mathbb{R}$ on $(\mathbb{R}, \mu_X)$, because the distribution of $X$ with respect to $\mathbb{P}$ is the same as the distribution of $x$ with respect to $\mu_X$. So we've safely reduced the study of abstract random variables to the study of the case where $\Omega = \mathbb{R}$, $X = x$, which might feel more concrete.
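As a concrete (made-up) instance of this reduction: take $\Omega=[0,1]$ with $\mathbb{P}$ the uniform measure and $X(\omega)=\omega^2$, so that $\mu_X$ has density $1/(2\sqrt{x})$ on $(0,1]$. Computing $E[X]$ on $\Omega$ and computing $\int x\,d\mu_X$ on $\mathbb{R}$ by midpoint-rule sums gives the same value, $1/3$:

```python
# E[X] computed two ways: on Omega = [0,1] against Lebesgue measure, and on R
# against the pushforward density 1/(2*sqrt(x)), both via the midpoint rule.
from math import sqrt

N = 200_000

# On Omega: integral of w**2 dw over [0,1]
e_on_omega = sum(((k + 0.5) / N) ** 2 for k in range(N)) / N

# On R: integral of x * (1 / (2*sqrt(x))) dx = integral of sqrt(x)/2 dx over (0,1]
e_on_R = sum(sqrt((k + 0.5) / N) / 2.0 for k in range(N)) / N

print(e_on_omega, e_on_R)  # both approximately 1/3
```

The two sums approximate the two sides of $\int_\Omega X\,d\mathbb{P} = \int_\mathbb{R} x\,d\mu_X$, matching the claim that studying $X$ on $(\Omega,\mathbb{P})$ is the same as studying the identity on $(\mathbb{R},\mu_X)$.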