Does it make sense to evaluate a function $f:\mathbb R\to \mathbb R$ pointwise when it is only defined almost everywhere? For example, if $f(x)=x$ a.e., does it really make sense to say that $f(0)=0$, or $f(1)=1$, or $f(y)=y$ for any fixed $y$? After all, if I fix a number $y$, the singleton $\{y\}$ has measure $0$, and thus $f(y)$ could be anything.
Maybe for such functions we are more interested in the proportion of $x$ such that $g(x)=0$ than in the value of $g(x)$ at any fixed $x$?
I'm adding this answer in reference to your other question about random variables.
What I was trying to convey in my answer to your other question was an answer to the fundamental question, "How do you formalize randomness?" Most of what I was trying to impart was perspectives rather than definitions. I think, based on this question, that you have taken my other answer further than I intended.
I'll back up and reiterate something I said at the top of my other post: random variables are just functions, and they behave like every other function you have ever encountered. It absolutely makes sense, conceptually, to evaluate them at a point. Moreover, it is not true that they are only defined up to a.e. equivalence. This is the real answer to your question here, and it was well covered by the other answers in this thread.
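To make this concrete (the names `f` and `g` here are my own, not anything from your question): two functions can agree almost everywhere yet take different, perfectly well-defined values at a single point. Each is an honest function with an honest value at $0$; it is only the a.e. equivalence *class* that fails to pin that value down.

```python
def f(x):
    return x

def g(x):
    # A different representative of the same a.e. class: equals f
    # everywhere except on the Lebesgue-null set {0}, where we
    # assign an arbitrary value.
    return 7.0 if x == 0 else x

# Point evaluation is perfectly well-defined for each function...
assert f(0) == 0 and g(0) == 7.0
# ...and the two agree off the measure-zero set {0}, so f = g a.e.
assert all(f(x) == g(x) for x in (-1.5, 0.3, 2.0))
```

So "$f(0)$" is meaningful once you have committed to a particular function; it is only ambiguous if all you know is the equivalence class.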
Where I think you're getting confused is with a probabilist's perspective. If you tell me to consider a random variable that is uniformly distributed on $[0,1]$, then there are many, many choices of $\Omega$ and maps $X$ that will work equally well to accomplish this, as I referenced in my other answer. In the background, though, I should actually commit to one of those choices and use it for any subsequent calculations.
You can tell that the particular $\Omega$ and $X$ can't actually be that important, because you do indeed have some freedom to select them. But once you have done so, that function $X$ will behave exactly like every other function you have ever encountered. The function is not merely defined up to a.e. equivalence, because that's not how functions work. But insofar as a probabilist would care about it or do anything with it, different choices for $\Omega$ and $X$ will turn out to be equivalent. Evaluating $X$ at any particular $\omega$ is generally not meaningful, even though it is perfectly well-defined, because the singleton $\{\omega\}$ typically has measure $0$.
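Here is a quick numerical sketch of that freedom of choice (the names `X1` and `X2` are hypothetical). Take $\Omega=[0,1]$ with Lebesgue measure, and $X_1(\omega)=\omega$ versus $X_2(\omega)=1-\omega$. These are different functions with different values at each point, yet both are uniformly distributed on $[0,1]$, so a probabilist would accept either one.

```python
import random

# Two different maps on the same sample space Omega = [0, 1]
# (with Lebesgue measure); both induce the Uniform(0, 1) distribution.
X1 = lambda w: w
X2 = lambda w: 1.0 - w

random.seed(0)
samples = [random.random() for _ in range(100_000)]

# Pointwise, the two functions disagree almost everywhere...
assert X1(0.25) != X2(0.25)

# ...but their induced distributions agree: the empirical estimate of
# P(X <= t) is close to t for both, as the uniform CDF demands.
for t in (0.1, 0.5, 0.9):
    p1 = sum(X1(w) <= t for w in samples) / len(samples)
    p2 = sum(X2(w) <= t for w in samples) / len(samples)
    assert abs(p1 - t) < 0.01 and abs(p2 - t) < 0.01
```

Everything a probabilist computes about a uniform random variable (its distribution, expectation, and so on) comes out the same for `X1` and `X2`, which is exactly why the pointwise values, though well-defined, carry no probabilistic content.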
By the way, this is all intended as a way to start thinking of the formal concept of randomness. You can't end there, though. The beginnings of measure-theoretic probability deal a lot with convergence theorems, and those cannot be proved, interpreted, or even understood without a solid foundation in understanding random variables as genuine, ordinary functions for which point evaluation is a perfectly sensible (but perhaps questionably useful) concept.
It is not true that random variables are functions defined only up to a.e. equivalence. Yet it is often (but not always) true in practice that a.e. equivalence is good enough for the things a probabilist would actually want to consider.