So assume we have a probability space $(\Omega, \mathcal{F}, P)$ and a random variable $X : \Omega \rightarrow \mathbb{R}^*$. From this we can derive a distribution $P_X$ and a distribution function $F$. My book says that when working with the distribution function directly we have no need to know the underlying probability space, since everything is well determined; we just need to show that one exists. So here is my attempt:
- Here we start with the distribution function $F$, increasing and right-continuous, s.t. $F(\infty) = 1$ and $F(-\infty) = 0$. We define, on the intervals of $\mathbb{R}$ (which I identify with pairs in $\mathbb{R}^2$):
\begin{align} P_I : \{[a,b] : a, b \in \mathbb{R}\} &\longrightarrow [0,1] \\ [a,b] &\longmapsto F(b) - F(a) \end{align}
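To sanity-check this assignment on a concrete example (my own, not the book's): take the exponential distribution function $F(x) = 1 - e^{-x}$ for $x \ge 0$ (and $F(x) = 0$ for $x < 0$). Then

\begin{align} P_I([1,2]) = F(2) - F(1) = (1 - e^{-2}) - (1 - e^{-1}) = e^{-1} - e^{-2} \approx 0.233 \end{align}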
- We can then define $P_X$ on the Borel sets by decomposing each set into disjoint intervals:
\begin{align} P_X : \mathcal{B}(\mathbb{R}^*) &\longrightarrow [0,1] \qquad B = \dot{\bigcup_i}\, [a_i, b_i] \\ B &\longmapsto \sum_i P_I([a_i, b_i]) \end{align}
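Here is a quick numerical sketch of that additivity step (my own check, with the exponential distribution function as a stand-in; the names `F` and `P_I` are just my own):

```python
import math

def F(x):
    """A concrete distribution function to test with: Exponential(1).
    (My own choice of example; the argument should work for any valid F.)"""
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def P_I(a, b):
    """Probability assigned to the interval (a, b] via F.
    For a continuous F the endpoints carry no mass, so [a, b] gives the same value."""
    return F(b) - F(a)

# Decompose B = (0, 3] into the disjoint intervals (0,1], (1,2], (2,3]:
parts = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
P_X_of_B = sum(P_I(a, b) for a, b in parts)

# The sum over the pieces must agree with the value on the whole interval:
print(abs(P_X_of_B - P_I(0.0, 3.0)) < 1e-12)  # True
```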
- By taking $X$ to be the identity map we get $\Omega = \mathbb{R}^*$ and $P = P_X$.
I suspect that I can use the fact that $F$ is increasing and right-continuous to say that there is a measure $\mu$ s.t. $\mu((a,b]) = F(b) - F(a)$, and then use this measure as the probability measure. But I'm afraid of missing a vital step along the way.
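On that last point, one standard way to exhibit such a space (different from my identity-map route above, but useful to compare against) is to take $\Omega = (0,1)$ with Lebesgue measure and let $X$ be the generalized inverse of $F$. A rough numerical sketch, again with the exponential $F$ as my stand-in example:

```python
import math
import random

def F(x):
    """Stand-in distribution function: Exponential(1) (my example)."""
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def F_inv(u):
    """Generalized inverse (quantile function) of F; here it is -log(1 - u)."""
    return -math.log(1.0 - u)

# Omega = (0,1) with Lebesgue measure; X = F_inv is a random variable on it.
random.seed(0)
n = 200_000
samples = [F_inv(random.random()) for _ in range(n)]

# Empirically, P_X((a, b]) should be close to F(b) - F(a):
a, b = 1.0, 2.0
empirical = sum(a < s <= b for s in samples) / n
print(abs(empirical - (F(b) - F(a))) < 0.01)  # True, up to Monte Carlo error
```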
I would like to know if there is any flaw in my reasoning, or anything missing. Thanks for any input!