The usual definition of a random variable (or random element) is that of a measurable function $X : (\Omega, \mathcal{F}, P) \rightarrow (\Omega', \mathcal{F}')$. Now I am not aware of any property or theorem that depends on the specific value of $X$ at every $\omega \in \Omega$. In particular, any random variable $X'$ that is $P$-almost surely equal to $X$ is generally considered equivalent to $X$ for all practical purposes.
So is there a good reason not to define random variables as equivalence classes, rather than laboriously specifying each time that such-and-such a statement holds almost surely, that such-and-such a sequence converges almost surely, that such-and-such an object is unique almost surely, etc.? By comparison, defining $L^p$ spaces as spaces of equivalence classes of almost everywhere equal functions greatly simplifies the phrasing of that theory (without the quotient, $\|\cdot\|_p$ is only a seminorm and $L^p$ is not a normed space).
So are there interesting/complex cases where we would really need to keep the distinction between almost surely equal random variables?
Edit
In agreement with @Pedro Tamaroff's comment I'm removing the last addendum to this question and opening a new one.
One of the first classes of examples that comes to mind where this matters concerns the almost sure properties of realizations of random processes indexed by uncountable sets, say the almost sure Hölder continuity of the paths of Brownian motion $(B_t)$. If one is allowed to modify each random variable $B_t$ on a null set, the resulting paths $t\mapsto B_t(\omega)$ may become ugly for every $\omega$ in an event of positive probability.
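To see why such a modification can go wrong (a sketch of my own, not tied to Brownian motion specifically): suppose for each $t$ we pick a null set $N_t$ and a modification $B'_t$ with

$$B'_t(\omega) = B_t(\omega) \quad \text{for all } \omega \notin N_t, \qquad P(N_t) = 0.$$

Then each $B'_t = B_t$ almost surely, but the set of $\omega$ whose path is changed somewhere,

$$N = \bigcup_{t} N_t,$$

is an uncountable union of null sets, which need not be null or even measurable. For instance, with $\Omega = [0,1]$ under Lebesgue measure and $N_t = \{t\}$ for $t \in [0,1]$, one gets $N = [0,1]$: every single $\omega$ has its path altered at some index $t$.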
Edit: Regarding "ugly" above, user @tomasz made a useful point in a comment below, which I reproduce here: if one is allowed to modify each random variable on a null set, the supremum of an arbitrary (uncountable) family of measurable functions need not be measurable, not even if the functions are almost everywhere zero (say, indicators of points).
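Here is that indicator example spelled out (my own elaboration of @tomasz's remark): take $\Omega = [0,1]$ with Lebesgue measure, and for each $t \in [0,1]$ let

$$X_t(\omega) = \mathbf{1}_{\{t\}}(\omega).$$

Each $X_t$ is measurable and $X_t = 0$ almost surely. Yet for any set $A \subseteq [0,1]$,

$$\sup_{t \in A} X_t = \mathbf{1}_A,$$

so choosing $A$ non-measurable (e.g. a Vitali set) produces a pointwise supremum of almost-surely-zero random variables that is not even measurable. Working with equivalence classes instead, the natural notion is the essential supremum, which here is simply $0$.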