Why not define random variables as equivalence classes?


The usual definition of a random variable (or random element) is that of a measurable function $X : (\Omega, \mathcal{F}, P) \rightarrow (\Omega', \mathcal{F}')$. Now I am not aware of any property or theorem that depends on the specific value of $X$ at every single $\omega \in \Omega$. In particular, any other random variable $X'$ that is $P$-almost surely equal to $X$ is generally considered equivalent to $X$ for all practical purposes.

So is there a good reason not to define random variables as equivalence classes, rather than laboriously specifying each time that such-and-such statement is true almost surely, that such-and-such sequence converges almost surely, that such-and-such object is unique almost surely, etc.? As a comparison, defining $L^p$ spaces as spaces of equivalence classes of almost-everywhere-equal functions helps a lot in simplifying the phrasing of the theory.
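For concreteness, the equivalence relation being proposed is presumably the exact analogue of the $L^p$ construction: declare $$X \sim X' \quad\Longleftrightarrow\quad P(X = X') = 1, \qquad\text{i.e.}\qquad P\bigl(\{\omega \in \Omega : X(\omega) \neq X'(\omega)\}\bigr) = 0,$$ and call a "random variable" the class $[X]$ of all measurable functions equivalent to $X$, just as an element of $L^p$ is a class of almost-everywhere-equal functions.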

So are there some interesting/complex cases where we would really need to keep the distinction between almost surely equal random variables?

Edit

In agreement with @Pedro Tamaroff's comment, I'm removing the last addendum to this question and opening a new one.

2 Answers

Accepted answer (17 votes)

One of the first classes of examples that comes to mind where this matters concerns the almost sure properties of realizations of random processes indexed by uncountable sets, say the almost sure Hölder continuity of the paths of Brownian motion $(B_t)$. If one is allowed to modify each random variable $B_t$ on a null set, the resulting paths $t\mapsto B_t(\omega)$ may become ugly for every $\omega$ in an event of positive probability.
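To see why, here is a standard construction (added for concreteness; it is not taken from the answer above): let $(B_t)_{t\ge 0}$ be a Brownian motion and let $U$ be an independent nonnegative random variable with a continuous distribution, so that $P(U = t) = 0$ for every fixed $t$. Define $$\tilde B_t(\omega) = \begin{cases} B_t(\omega) + 1 & \text{if } U(\omega) = t,\\ B_t(\omega) & \text{otherwise.} \end{cases}$$ For each fixed $t$ we have $P(\tilde B_t \neq B_t) = P(U = t) = 0$, so every $\tilde B_t$ equals $B_t$ almost surely. Yet for every single $\omega$ the path $t \mapsto \tilde B_t(\omega)$ has a jump at $t = U(\omega)$, so the modified process has no continuous (let alone Hölder continuous) path at all.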

Edit: Regarding "ugly" above, user @tomasz made a useful point in a comment below, which I reproduce here: if one is allowed to modify each random variable on a null set, the supremum of an arbitrary (uncountable) family of measurable functions need not be measurable, not even if the functions are almost everywhere zero (say, indicators of points).
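A worked instance of that comment (spelled out here for concreteness): on $[0,1]$ with Lebesgue measure, let $f_t = \mathbf{1}_{\{t\}}$ for each $t \in [0,1]$; every $f_t$ is measurable and equal to zero almost everywhere. Pick a non-measurable set $A \subset [0,1]$ and modify each $f_t$ on the null set $\{t\}$ by setting $g_t = f_t$ if $t \in A$ and $g_t \equiv 0$ if $t \notin A$. Then $$\sup_{t \in [0,1]} g_t = \mathbf{1}_A$$ is not measurable, although $\sup_{t\in[0,1]} f_t \equiv 1$ was, and each $g_t$ is still zero almost everywhere.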

Answer (1 vote)

Although every finite moment of a variate $X$ equals the corresponding moment of an "equivalent" variate $X^\star$, one might consider the range of a distribution an interesting property. If you do, then consider the following two variates, both derived from an underlying uniform random variable $U$ on $[0,1]$: $$ X = \left\{ \begin{array}{cl} U & \text{if } U \text{ is irrational,}\\ -U & \text{if } U \text{ is rational,} \end{array}\right. \qquad X^\star = U. $$ The variate $X$ is almost surely equal to $X^\star$, but the values of $X$ span $[-1,1)$ whilst the range of $X^\star$ is $[0,1]$.
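For completeness, the almost-sure equality can be checked directly: $X$ and $X^\star$ differ only when $U$ is rational (and nonzero), so $$P(X \neq X^\star) \le P\bigl(U \in \mathbb{Q} \cap [0,1]\bigr) = \sum_{q \in \mathbb{Q}\cap[0,1]} P(U = q) = 0,$$ since $\mathbb{Q}\cap[0,1]$ is countable and $U$ has a continuous distribution; nevertheless the two variates have different sets of attainable values.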