Suppose I have some random variable $X$ defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$. What does it mean, in measure theoretic terms, to draw a sample from $X$?
When $\Omega$ is finite, things make sense: we might say nature rolls its dice and draws an $\omega \in \Omega$ according to $\mathbb{P}$, and our sample is simply $X(\omega)$.
In uncountably infinite domains it's not clear to me how this is defined. As we all know, every singleton $\{\omega\} \subseteq \Omega$ has measure zero (at least when the distribution is continuous); only measurable subsets $A \subseteq \Omega$ can carry positive measure. But sampling implies getting elements of $\Omega$. How does one reconcile the idea of drawing a sample $x$ from some distribution with the fact that $\mathbb{P}(X = x) = 0$?
And yet, I can call the rand function in my favourite programming language and it will sample from a continuous distribution. But of course it is not really continuous; it is a floating-point approximation. Could it be that the discretisation is required to have a well-defined notion of sampling?
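To make the "floating-point approximation" point concrete, here is a small Python sketch (assuming CPython's standard `random` module, whose `random()` returns dyadic rationals of the form $k/2^{53}$). It shows that every value the generator can produce lies in a finite set, so the "continuous" uniform on $[0,1)$ is really a uniform distribution on finitely many rationals:

```python
import random

random.seed(0)
x = random.random()  # a double in [0, 1)

# Floats are exact dyadic rationals; as_integer_ratio() recovers
# the reduced fraction num/den with den a power of two.
num, den = x.as_integer_ratio()

# CPython's random() only returns multiples of 2**-53, so the
# reduced denominator always divides 2**53.
assert den <= 2**53 and 2**53 % den == 0
print(f"{x} = {num}/{den}")
```

So the language's sampler never faces the measure-zero issue: each of its outputs has strictly positive probability.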
The simple answer is that in measure theory we talk about the distribution of draws and do not work with individual draws themselves. There are ways to make sense of uncountably many draws from a continuous distribution, but they are highly nontrivial.
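Concretely (assuming for illustration that $X$ is real-valued with a density $f$), the measure-theoretic statements are always about sets of outcomes, never about a single outcome:

$$\mathbb{P}(X \in [a,b]) = \int_a^b f(x)\,\mathrm{d}x, \qquad \mathbb{P}(X = x) = \int_{\{x\}} f\,\mathrm{d}\lambda = 0,$$

where $\lambda$ is Lebesgue measure. The theory assigns probabilities to events like $X \in [a,b]$; "the value of this particular draw" is simply not an object it quantifies over.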
Your favorite programming language avoids the problem by approximation.
Btw: it all depends on the measure; nothing prevents me from defining a probability measure on an uncountable domain that picks a certain element with probability $1$.
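The standard example is the Dirac measure: take the uncountable domain $\Omega = \mathbb{R}$, fix a point $\omega_0 \in \mathbb{R}$, and define

$$\delta_{\omega_0}(A) = \begin{cases} 1 & \text{if } \omega_0 \in A, \\ 0 & \text{otherwise}, \end{cases} \qquad A \in \mathcal{B}(\mathbb{R}).$$

This is a perfectly legitimate probability measure on an uncountable space, yet $\mathbb{P}(\{\omega_0\}) = 1$: every draw is $\omega_0$.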