Measure-theoretic probability: the meaning of inequalities involving multiple random variables


I'm a little unclear on what certain inequalities between random variables refer to. For instance, if we have random variables $X,Y$ defined on the same probability space $(\Omega,\mathcal{F},\mathbb{P})$, then does $\mathbb{P}(X<Y)$ mean: $$ \mathbb{P}(\{\omega \in \Omega: X(\omega) < Y(\omega)\}) $$ or do we have to treat it on a finite product space, i.e. $$ \mathbb{P}(\{(\omega_1,\omega_2)\in\Omega^2:X(\omega_1)<Y(\omega_2)\}) $$ Could either of these be valid, depending on the context? Thanks!


Accepted answer

The first interpretation is correct. When a working probabilist sees an expression like $\mathbb P( X<Y)$ she would think of the set of $\omega$ cut out by the inequality $X(\omega)<Y(\omega)$, or equivalently, the set of $\omega$ cut out by $Z(\omega)>0$, where $Z=Y-X$ is a new random variable (that is, a new function of $\omega$), and so on.
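A minimal simulation sketch (plain Python, with a made-up pair of dependent random variables $X(\omega)=\omega$ and $Y(\omega)=2\omega^2$) illustrating that the first interpretation agrees with $\mathbb{P}(Z>0)$ for $Z=Y-X$, while the product-space interpretation can give a different number when $X$ and $Y$ are dependent:

```python
import random

random.seed(0)
n = 200_000

# One probability space: each draw of omega is a sample point, and X, Y
# are (dependent) functions of the SAME omega.
omegas = [random.random() for _ in range(n)]

def X(w):
    return w

def Y(w):
    return 2 * w * w

# First interpretation: P({omega : X(omega) < Y(omega)}).
p_same = sum(X(w) < Y(w) for w in omegas) / n

# Equivalent view: Z = Y - X is just another random variable on the same space.
p_z = sum(Y(w) - X(w) > 0 for w in omegas) / n

# Product-space interpretation: evaluate X and Y at independent copies of omega.
omegas2 = [random.random() for _ in range(n)]
p_prod = sum(X(w1) < Y(w2) for w1, w2 in zip(omegas, omegas2)) / n

# Here X(omega) < Y(omega) iff omega > 1/2, so p_same is near 0.5,
# while p_prod estimates a different quantity (about 0.53).
print(p_same, p_z, p_prod)
```

The gap between `p_same` and `p_prod` shows the two readings of $\mathbb{P}(X<Y)$ are genuinely different events unless $X$ and $Y$ are independent.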

Let me ask you: if you are asked "describe the set where $\sin < \cos$", wouldn't you naturally translate that to "describe the set of $\theta$ such that $\sin\theta< \cos \theta$" and not "describe the set of $\theta_1$ and $\theta_2$ such that $\sin\theta_1< \cos \theta_2$"?

Of course, "two-$\omega$" scenarios like your second example come up from time to time, but always with an explanation of the special situation and notation involved. For instance, one can construct two iid uniform rvs as functions on $\Omega=[0,1]$ by in effect constructing a Lebesgue measure-preserving map between $[0,1]$ and $[0,1]^2$: identify $\omega\in[0,1]$ with its binary expansion $\omega=\sum_k b_k 2^{-k}$ (for $b_k\in\{0,1\}$), and take the odd and even bits of the expansion to form $\omega_1 = \sum_k b_{2k-1} 2^{-k}$ and $\omega_2=\sum_k b_{2k} 2^{-k}$. This identifies $[0,1]$ with $[0,1]^2$, except on the null set of points with more than one valid binary expansion. Then $X(\omega)=\omega_1$ and $Y(\omega)=\omega_2$ are iid uniform rvs on the same underlying $\Omega$. They are independent because they look at disjoint sets of the underlying bits $b_k$.

This motivates using a space like $\{0,1\}^{\mathbb N}$ as the underlying $\Omega$: you can use it to define a countable number of independent bit streams, which you can use to build up a countable number of iid uniforms on $[0,1]$, which you can use to build up a countable number of independent rvs, and so on. This $\Omega$ is the probabilist's default Lego kit.