I am looking into the Miller-Rabin primality test.
At some point it is stated that if $b \approx \log_2(n) \ge 32$, then the probability that $n$ is composite despite passing $k$ tests is at most $4^{-k}$.
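For concreteness, here is a minimal sketch of the probabilistic test I am referring to (function names are my own):

```python
import random

def is_strong_probable_prime(n, a):
    """One Miller-Rabin round: is n a strong probable prime to base a?"""
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def miller_rabin(n, k=20):
    """Probabilistic test: a composite n survives all k random
    rounds with probability at most 4**(-k)."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return all(is_strong_probable_prime(n, random.randrange(2, n - 1))
               for _ in range(k))
```

For example, $2047 = 23 \cdot 89$ is the smallest strong pseudoprime to base 2, so a single round with base 2 accepts it while base 3 rejects it.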
Now, there are by definition $2^k$ numbers below $2^k$, so the probability of picking any given number from that interval uniformly at random is $2^{-k}$.
If both statements are correct, then it seems to me that for any $n > 2^{32}$ it is sufficient to perform $k > \log_2(n)$ (actually even $k > \log_4(n) = \frac{\log_2(n)}{2}$) Miller-Rabin tests to ensure primality (for example, using the first $k$ primes as bases).
However, the Miller test (which is deterministic) is much stricter than this: it requires testing all bases below $2 \log^2(n)$ (more precisely, below $\min(n - 2, 2 \log^2(n))$, with the natural logarithm), relying on the -- unproven, as far as I know -- extended Riemann hypothesis, although it has been argued that this limit may be reduced to $\frac{\log(n) \log(\log(n))}{\log(2)}$.
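The deterministic variant I mean could be sketched like this (correct only under the extended Riemann hypothesis; the function name is my own):

```python
import math

def miller_deterministic(n):
    """Miller's test: deterministic assuming ERH, by trying every
    base a with 2 <= a <= min(n - 2, floor(2 * ln(n)**2))."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    limit = min(n - 2, math.floor(2 * math.log(n) ** 2))
    for a in range(2, limit + 1):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True
```

Note that for $n$ around $2^{32}$ this already means roughly $2 \ln^2(2^{32}) \approx 984$ bases, far more than the handful of rounds suggested by the probabilistic argument above, which is exactly the gap I am asking about.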
Could anybody please provide some clarification over this?
I also have the following related questions:
- are there any combinations of Miller-Rabin bases that are equivalent for all $n$?
- why is it common (though not the most effective choice) to pick primes as bases?
- is there some "improved orthogonality" behind this choice?
- is there any work indicating that if numbers below $x$ are prime whenever they pass Miller-Rabin for the first $k$ primes, then numbers below some $y > x$ are prime whenever they pass Miller-Rabin for the first $k + 1$ primes?
Any help is appreciated.