Let $(\Omega, \mathscr F, P)$ be a probability space and let $X, Y : \Omega \to \mathbb R$ be iid standard Gaussian RVs. By definition this means that $$\newcommand{\d}{\,\text d} P_X(A) := (P \circ X^{-1})(A) = \int_A \frac{1}{\sqrt{2\pi}}e^{-x^2/2} \d\lambda(x) $$ for every Borel set $A$ and analogously for $P_Y$.
My question: what do I know about the actual functions $X$ and $Y$? Can I ever know something like $X(1)$ (assuming $1 \in \Omega$)? How different can two functions be to still have the same distribution?
Let's say I have $\Omega = [0,1]$ so $\mathscr F = \mathfrak B_{[0,1]}$ (Borel $\sigma$-algebra) and $P = \lambda$ (the Lebesgue measure). What's an example of a function $X:[0,1] \to \mathbb R$ such that $X$ has a Gaussian distribution? And how different could a second function $Y : [0,1] \to \mathbb R$ be such that it too has a Gaussian distribution? E.g. do they only differ on a set of measure $0$, or can they be arbitrarily different? Or do I need to have $\Omega$ unbounded in order to have a Gaussian RV in the first place?
Thanks for any help, and I'm happy to add details if this is too vague. Part of my problem is that I'm struggling to even specify what exactly I'm uncertain about, but I know it's something to do with evaluating RVs in a pointwise fashion versus only ever caring about their measures.
Update
After reading @MichaelHardy's comments I've realized that I don't actually care about independence. What I'm really trying to get at is how different two RVs can be, as functions, while still having the same distribution. Do the actual values $X(\omega)$ and $Y(\omega)$ even matter? Here I'm using a common probability space, but in general they could have absolutely nothing in common, right? Like I could have $\Omega_X = [0,1]$ while $\Omega_Y = \mathbb C$ or something like that, so as functions $X$ and $Y$ really look nothing alike; the only connection is that the "proportions" of their domains mapped to each interval in $\mathbb R$ are the same?
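To make this concrete for myself, here is a small Python sketch (my own illustration, assuming Python ≥ 3.8 for `statistics.NormalDist`): two functions on $\Omega = (0,1)$ with $P$ the Lebesgue measure that disagree at every point except $\omega = 1/2$, yet induce the same $N(0,1)$ distribution.

```python
import statistics

# Quantile function Phi^{-1} of the standard normal (Python stdlib, >= 3.8)
Phi_inv = statistics.NormalDist().inv_cdf

# Two random variables on Omega = (0, 1) with P = Lebesgue measure:
def X(w):
    return Phi_inv(w)

def Y(w):
    # By symmetry of N(0,1), Phi^{-1}(1 - w) = -Phi^{-1}(w), so Y = -X:
    # a genuinely different function with the same distribution.
    return Phi_inv(1 - w)

# Evaluate both on a fine grid of interior points of (0, 1).
n = 1000
grid = [i / n for i in range(1, n)]

# As functions they disagree everywhere except at w = 1/2 ...
disagreements = sum(X(w) != Y(w) for w in grid)

# ... yet their sorted values (empirical distributions) coincide.
xs, ys = sorted(map(X, grid)), sorted(map(Y, grid))
max_gap = max(abs(a - b) for a, b in zip(xs, ys))
```

So the pointwise values can be completely different (here $Y = -X$) while every event $\{X \le t\}$ and $\{Y \le t\}$ has the same probability.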
Possibly the simplest function $X:[0,1] \to \mathbb R$ for which $X\sim N(0,1)$, when the probability distribution on $[0,1]$ is the Lebesgue measure, is $X=\Phi^{-1}$ (defined almost everywhere, since $\Phi^{-1}$ maps $(0,1)$ onto $\mathbb R$), where $$ \Phi(x) = \frac 1 {\sqrt{2\pi}} \int_{-\infty}^x e^{-u^2/2} \, du. $$ If you want $X,Y:\Omega\to\mathbb R$ with $X,Y\sim\operatorname{i.i.d.} N(0,1),$ then I might use $\Omega= [0,1]\times[0,1]$ with $X(\omega) = \Phi^{-1}(x)$ and $Y(\omega)= \Phi^{-1}(y)$ where $\omega=(x,y).$ It is possible to do this with $\Omega= [0,1],$ but it gets messy. For example, one can use $\omega\mapsto(x,y)$ where $\omega\in[0,1],$ the decimal expansion of $x$ consists of the even-numbered digits of $\omega,$ and that of $y$ of the odd-numbered digits; then take $X = \Phi^{-1}(x)$ and $Y = \Phi^{-1}(y)$ as before.
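A rough numerical sketch of the digit-interleaving idea (an illustration only, not part of the formal argument; it assumes Python ≥ 3.8 for `statistics.NormalDist` and truncates each expansion to finitely many digits): de-interleave the decimal digits of $\omega$ into $x$ and $y$, push both through $\Phi^{-1}$, and check empirically that the two coordinates look like uncorrelated standard normals.

```python
import random
import statistics

# Quantile function Phi^{-1} of the standard normal (Python stdlib, >= 3.8)
Phi_inv = statistics.NormalDist().inv_cdf

D = 16  # decimal digits kept per coordinate (truncated expansion)

def split_digits(w_digits):
    """De-interleave a digit string: even-numbered digits -> x, odd -> y."""
    x_digits, y_digits = w_digits[0::2], w_digits[1::2]
    return int(x_digits) / 10**len(x_digits), int(y_digits) / 10**len(y_digits)

rng = random.Random(42)
xs, ys = [], []
for _ in range(50_000):
    # Draw omega in [0, 1] as 2*D random decimal digits of its expansion
    w_digits = "".join(rng.choice("0123456789") for _ in range(2 * D))
    u, v = split_digits(w_digits)
    if 0 < u < 1 and 0 < v < 1:  # Phi^{-1} is undefined at the endpoints
        xs.append(Phi_inv(u))
        ys.append(Phi_inv(v))

# Both coordinates should look standard normal, and uncorrelated.
mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
std_x, std_y = statistics.pstdev(xs), statistics.pstdev(ys)
mean_xy = statistics.fmean(a * b for a, b in zip(xs, ys))
corr = (mean_xy - mean_x * mean_y) / (std_x * std_y)
```

Of course zero sample correlation is only a crude check, not a proof of independence; the actual argument is that the digit maps send Lebesgue measure on $[0,1]$ to product Lebesgue measure on $[0,1]^2$.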