Is ergodic theory used in numerical simulations?
The kind of application I have in mind is: for $\alpha$ irrational, $( n\alpha \mod 1)_{n \geq 0}$ is equi-distributed on $[0,1]$, and I imagine that this fact could be used to simulate a random variable on $[0,1]$ with uniform distribution.
I imagine that more sophisticated uses of ergodic theory could replace Monte Carlo simulations, for example.
Various methods of getting pseudorandom numbers from irrationals have been tried (mainly from the digits of $\pi$ and $e$). Surprisingly, there is controversy over whether some of these methods come close enough to genuine random observations from UNIF(0,1) to be useful (you can google that). And as already commented, computing with irrationals has so far proved very slow compared with alternative methods.
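To make the question's idea concrete, here is a minimal sketch (in Python, names my own) of the sequence $(n\alpha \bmod 1)$. By Weyl's theorem it is equidistributed for irrational $\alpha$, so sample averages converge to the uniform mean of $1/2$; but note that consecutive terms differ by the constant $\alpha$, so the sequence is highly predictable and fails tests of randomness even though it is equidistributed — which is one reason such constructions are controversial as PRNGs:

```python
import math

def weyl_sequence(alpha, n):
    """First n terms of (k*alpha mod 1); equidistributed on [0, 1)
    for irrational alpha (Weyl's equidistribution theorem)."""
    return [(k * alpha) % 1.0 for k in range(1, n + 1)]

# A typical irrational choice: the fractional part of the golden ratio.
alpha = (math.sqrt(5) - 1) / 2
xs = weyl_sequence(alpha, 10_000)

# Equidistribution: the sample mean should be close to 1/2.
print(sum(xs) / len(xs))

# Predictability: consecutive terms always differ by alpha (mod 1),
# so knowing one term reveals the entire remaining sequence.
gap = (xs[1] - xs[0]) % 1.0
print(abs(gap - alpha) < 1e-12)
```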
There is a vast literature on various methods of generating pseudorandom numbers on a computer, starting with von Neumann, etc. in the 1950s. Current pseudorandom number generators work very fast and are considered to be of high quality.
The default PRNG in R is the Mersenne-Twister, which has a period of $2^{19937} - 1$, has passed an impressive battery of tests for randomness, and has been shown to be equidistributed in 623 dimensions. In my experience it works very well. Even on an old computer I can generate many thousands of pseudorandom numbers a second.
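The same generator is the default well beyond R; for instance, CPython's standard `random` module is also built on the Mersenne Twister (MT19937). A quick illustration of typical use, including the reproducibility you get from seeding:

```python
import random

# CPython's `random` module uses the Mersenne Twister (MT19937),
# the same generator family as R's default.
rng = random.Random(42)  # seeding makes the stream reproducible

sample = [rng.random() for _ in range(5)]  # pseudorandom floats in [0, 1)
print(sample)

# Reseeding with the same value reproduces the identical stream --
# a key property for debugging and for reproducible simulations.
rng2 = random.Random(42)
print([rng2.random() for _ in range(5)] == sample)  # True
```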
However, with ever faster computers doing ever more complex simulations, the search for faster PRNGs with longer periods continues. There will be no end to the 'greediness' of applied statisticians and probabilists for better and faster PRNGs, so it is certainly worthwhile thinking about possible improvements.
My personal guess is that if quantum computers ever become standard, it may be possible to access truly random numbers rapidly enough for complicated simulations.
It is true, as commented, that MCMC (simulation based on Markov chains) can be slow for intricate problems, but such simulations are far more complicated than simply generating pseudorandom numbers in the unit interval.
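To see why MCMC sits a level above plain uniform generation, here is a minimal random-walk Metropolis sketch (illustrative only; the target, proposal scale, and function names are my own choices). Each step of the chain consumes several uniform draws, so the uniform PRNG is a building block rather than the whole cost:

```python
import math
import random

def metropolis_normal(n_steps, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis sampler targeting a standard normal.
    Every step uses uniform pseudorandom numbers (for the proposal
    and the accept/reject decision), so MCMC is built on top of --
    and is far costlier per sample than -- plain uniform generation."""
    rng = random.Random(seed)
    x = 0.0
    chain = []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-proposal_scale, proposal_scale)
        # Log acceptance ratio for the unnormalized N(0,1) density exp(-x^2/2).
        log_ratio = (x * x - proposal * proposal) / 2.0
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal
        chain.append(x)
    return chain

chain = metropolis_normal(50_000)
print(sum(chain) / len(chain))  # should be near 0, the target mean
```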