Given a random variable $U \sim Unif(0,1)$ we can set $X = F^{-1}(U)$ and conclude that $X$ has cdf $F$ (assumed continuous and strictly increasing, so that $F^{-1}$ is well defined on $(0,1)$). Indeed, $$ P(X \leq x) = P(F^{-1}(U) \leq x) = P(U \leq F(x)) = F(x). $$
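As a concrete check of this recipe, here is a minimal R sketch using the exponential distribution with rate 1, for which $F(x) = 1 - e^{-x}$ and hence $F^{-1}(u) = -\log(1-u)$ (the seed is my own choice for reproducibility):

```r
set.seed(1)
u <- runif(1e5)      # ostensibly Unif(0,1) values from the PRN generator
x <- -log(1 - u)     # inverse CDF of Exp(rate = 1): F^{-1}(u) = -log(1 - u)

# Compare the empirical CDF at a few points with the exact CDF F(q) = 1 - exp(-q)
for (q in c(0.5, 1, 2)) {
  print(c(empirical = mean(x <= q), exact = 1 - exp(-q)))
}
```

The empirical proportions agree with the exact CDF to within sampling error, which is exactly what the displayed identity predicts.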
Note, in particular, that in the last equality we relied on the uniform distribution of $U$. But, when we use a random number generator on a computer, it gives values in $[0,1]$ but does so deterministically; i.e., these values don't necessarily have the $Unif(0,1)$ distribution. Even worse, quasi-random generators like the Halton sequence fill up the unit interval in a very specific order; points from these sequences are certainly not uniformly distributed.
So, why is it that we may still use the inverse transformation method on quasi-random numbers? How does this guarantee a sample from cdf $F$?
The equation you give, taken by itself, has nothing to do with simulation. Your real questions involve doubts about the behavior of the pseudo-random number generator.
When using PRNs from a generator, one assumes that for practical purposes, they are not distinguishable from a sequence of independent observations from $Unif(0,1).$
Of course generators vary in quality, but recent ones work very well. The default PRN generator in R statistical software is the 'Mersenne twister', which produces a very large (Mersenne prime) number of distinct values before the sequence repeats. It has been vetted against a large battery of statistical tests known to have exposed flaws in earlier generators. And there are some other modern PRN generators that have good track records of working well in simulations.
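In R you can inspect which generator is in use and make the deterministic stream reproducible by seeding it; a minimal sketch:

```r
# Inspect R's current PRN settings; the default kind is "Mersenne-Twister".
RNGkind()

set.seed(2023)     # fix the generator's internal state
u1 <- runif(5)
set.seed(2023)     # reset to the same state
u2 <- runif(5)
identical(u1, u2)  # same seed, same deterministic stream
```

The reproducibility shown here is precisely the determinism the question worries about; the point is that the stream, while deterministic, is designed to be statistically indistinguishable from iid $Unif(0,1)$ draws.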
In the 1950's John von Neumann became frustrated with his early attempts to generate PRNs with computer algorithms, and famously said that anyone doing this was "living in a state of sin." So you are not the first person to have such doubts.
Your assumption that the random variable $X$ in your equation must be continuous is not exactly correct. If the quantile function (inverse CDF) is defined as $F^{-1}(u) = \min\{x : F(x) \geq u\},$ the equation holds for discrete random variables as well.
For example, there are better ways to simulate independent observations from $Binom(n = 10, p = 1/2)$ than by using the quantile function, but here is how that kind of simulation looks for 100,000 observations. In R, the relevant quantile function is `qbinom(u, n, p)`. Values `u` are from the Mersenne twister PRN generator. The figure below shows a histogram of the simulated distribution of $X \sim Binom(10, .5);$ dots atop histogram bars show exact binomial probabilities.
Note: There is a long history of increasingly successful attempts to generate standard normal distributions. (The Box-Muller method is one of them.) Even though the normal CDF cannot be written in closed form, piecewise 'rational function approximations' can come very close. Wichura has essentially inverted one of these to get an approximation to the standard normal quantile function that is about as accurate as can be represented in double-precision arithmetic. So the current standard for algorithmic generation of random samples from $Norm(0,1)$ seems to be to use the formula you show along with Wichura's quantile function. This is the default method in R.
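A minimal sketch of this inversion approach in R (`qnorm` implements Wichura's algorithm, and R's default `normal.kind` is `"Inversion"`):

```r
set.seed(42)
u <- runif(1e5)   # Mersenne twister uniforms
z <- qnorm(u)     # Wichura's quantile-function approximation

# The transformed values behave like a standard normal sample
c(mean = mean(z), sd = sd(z))   # close to 0 and 1

# Round-trip accuracy of the quantile/CDF pair, even far in the tails
p <- c(1e-10, 0.5, 1 - 1e-10)
max(abs(pnorm(qnorm(p)) - p))
```

Note that R's built-in `rnorm` also uses inversion by default, although internally it combines two uniforms per draw for extra precision, so `rnorm(n)` and `qnorm(runif(n))` give equally valid but not bitwise-identical streams.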