I have been reading about the Miller–Rabin primality test. So far I think I understand it, except for the part where the accuracy is stated.
E.g. from Wikipedia:
The error made by the primality test is measured by the probability for a composite number to be declared probably prime. The more bases $a$ are tried, the better the accuracy of the test. It can be shown that if $n$ is composite, then at most $1/4$ of the bases $a$ are strong liars for $n$. As a consequence, if $n$ is composite then running $k$ iterations of the Miller–Rabin test will declare $n$ probably prime with a probability at most $4^{-k}$.
So if I understand correctly: if we have a large number $N$ and we test $k$ random witnesses, and none of them detects the compositeness of $N$, then the probability that $N$ is not prime is at most $1$ in $4^k$.
What I am not clear on is where this $\frac{1}{4}$ comes from.
I understand we have $4$ conditions to be checked (in order), i.e.:
- $a \not\equiv 0 \mod N$
- $a^{N-1} \not\equiv 1 \mod N$
- $x^2 \equiv 1 \mod N$
- $x \equiv \pm 1 \mod N$
The process is the following:
In the above $a$ is the witness. We first check condition (1).
If that passes we check condition (2).
To do that we start multiplying $a, a \cdot a, a \cdot a \cdot a, \ldots$ until we reach $a^{N-1}$.
To do that efficiently we can use repeated squaring. If, during the squaring, we encounter a number $x$ such that $x^2 \equiv 1$ but $x \not\equiv 1$ and $x \not\equiv -1$ (e.g. $19^2 \equiv 1 \pmod {40}$ but $19 \not\equiv 1 \pmod {40}$ and $19 \not\equiv -1 \pmod {40}$), then conditions (3) and (4) fail; otherwise we continue multiplying.
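A quick sanity check of the worked example ($19$ is a nontrivial square root of $1$ modulo $40$) in Python:

```python
# 19^2 = 361 ≡ 1 (mod 40), yet 19 is neither 1 nor -1 mod 40,
# so 19 is a "nontrivial" square root of 1 — impossible modulo a prime.
x, n = 19, 40
print(pow(x, 2, n))   # 1
print(x % n, -1 % n)  # 19 and 39: x is neither 1 nor n-1
```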
Finally, we check the final result against condition (2).
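The process described above can be sketched in Python. A minimal sketch, with one conventional reorganization: instead of squaring $a$ up to $a^{N-1}$ and watching for a nontrivial square root of $1$, it writes $N-1 = 2^s d$ with $d$ odd and runs the squaring chain from $a^d$ upward, which performs the same checks. The function name `is_probable_prime` and the trial division by small primes are my own additions, not part of the question:

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin: False means n is definitely composite;
    True means probably prime, with error probability at most 4**-k."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):        # handle small cases / even n
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)   # random witness candidate
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue                     # a is not a witness this round
        for _ in range(s - 1):
            x = pow(x, 2, n)             # the squaring chain
            if x == n - 1:
                break                    # reached -1: chain will end in 1 legitimately
        else:
            # Either a**(n-1) != 1, or we passed through a nontrivial
            # square root of 1 — conditions (2)-(4) above: n is composite.
            return False
    return True
```

For example, `is_probable_prime(2047)` returns `False` with overwhelming probability even though $2047 = 23 \cdot 89$ passes the test for the single base $a = 2$.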
Does the $1/4$ mean that at most $1$ in $4$ of the possible witnesses can falsely indicate a prime? If so, how is that proven?
A more rigorous statement is as follows.
Suppose we want to consider whether $n$ is prime. Write $n - 1 = 2^s d$ with $d$ odd, and let $W_n(b)$ be the statement that
$$b^d \not\equiv 1 \pmod n \quad \text{and} \quad b^{2^r d} \not\equiv -1 \pmod n \text{ for all } 0 \leq r < s.$$
If $W_n(b)$ holds, then we say that $b$ is a witness to the compositeness of $n$. Rabin proved that if $n$ is an odd composite, then no more than $1/4$ of the numbers $1 \leq b < n$ fail to be witnesses to its compositeness. (This is Theorem 1 of his paper; full citation below.)
Thus if you choose $k$ bases mod $n$ independently and uniformly at random, each one fails to witness compositeness with probability at most $1/4$, so all $k$ pass Miller's test with probability at most $4^{-k}$. This is where the $1/4^k$ comes from.
I'll note that typically the fraction of non-witnesses is actually much smaller than $1/4$.
References
Rabin, Michael O. (1980), "Probabilistic algorithm for testing primality", *Journal of Number Theory*, 12 (1): 128–138.