Currently, the largest known prime is a Mersenne prime, $2^{82,589,933} − 1$. That's an $82,589,933$-bit number, if I am correct. Considering that RSA keys of as few as 1024 bits can be considered safe, how was this number checked for primality? I can partly answer that question myself: I am aware that there is a special, much faster primality test for Mersenne numbers. But given a number of similar size with no special form, would we even be able to check whether it was prime? How long would it take? How fast are the fastest primality-testing algorithms for numbers of no special form?
How do they check really large primes?
2k views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 5 answers below.
The answer is that it wasn't factored to show it was prime; a special Mersenne primality test was used. GIMPS, who found your prime, uses the Lucas-Lehmer test, after first checking for small factors explicitly. Given a completely arbitrary number of the same size, checking its primality is a lot more work, and not really feasible with current technology as far as I'm aware.
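The Lucas-Lehmer recurrence itself is remarkably simple; a minimal sketch (the function name is mine, and GIMPS's real implementation relies on FFT-based squaring to make each step feasible at 82-million-bit sizes):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
    iff s_{p-2} == 0, where s_0 = 4 and s_{k+1} = s_k**2 - 2 (mod M_p)."""
    if p == 2:
        return True  # M_2 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(7))   # 2**7 - 1 = 127 is prime
print(lucas_lehmer(11))  # 2**11 - 1 = 2047 = 23 * 89 is composite
```

Note that the test needs only $p - 2$ modular squarings, each of a $p$-bit number, which is why proving $M_p$ prime is so much cheaper than testing a generic number of the same size.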
After the famous 'PRIMES is in P' paper of Agrawal, Kayal and Saxena, there are polynomial-time algorithms for testing whether a number is prime. According to Wikipedia the running time of these is $O(\log(n)^6)$, and while this is massively faster than actually factoring a number like $2^{82,589,933}−1$, the sixth power is still too big to make this feasible on modern computers for numbers of that size. On the other hand, these algorithms are more than sufficient to quickly check that an RSA modulus is not prime. This is not useful for breaking RSA: the scheme's security rests on the difficulty of recovering the two large prime factors, not on hiding the fact that the modulus is composite.
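A back-of-the-envelope calculation makes the "sixth power is still too big" point concrete (the operation count is a rough heuristic, ignoring constants and log factors):

```python
# The number 2**82589933 - 1 has 82,589,933 bits, so log2(n) is about 8.26e7.
# An O(log(n)^6)-style test would then need on the order of (8.26e7)**6
# "basic" operations -- roughly 3e47, far beyond any conceivable hardware.
bits = 82_589_933
ops = bits ** 6
print(f"~{ops:.2e} operations")
# Even a machine doing 1e18 operations per second would need ~1e22 years.
years = ops / 1e18 / (3600 * 24 * 365)
print(f"~{years:.1e} years at 1 exaop/s")
```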
Mersenne numbers are special, as others have pointed out (the proof of primality is done with the Lucas-Lehmer test). Fermat numbers are also special and can be proven prime with Pépin's test. There are also well-known primality tests for numbers of the form $k \cdot 2^n \pm 1$ when $k < 2^n$ (Proth's theorem for the plus side, and a related test for the minus side using Lucas sequences). I recommend reading from here if you are interested in these tests.
RSA primes, on the other hand, are not certified with deterministic primality tests like the ones above. Instead (in most cases), one uses probabilistic tests: they work very well in practice, but cannot prove that a number is actually prime. Such tests include the Fermat test, Miller-Rabin, Euler-Jacobi, BPSW, Frobenius, etc.
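The workhorse among these is Miller-Rabin; a minimal sketch (the function name and round count are mine, and production libraries add base selection and pre-sieving refinements):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test. A composite n passes a
    single round with probability at most 1/4, so after `rounds`
    independent rounds the error probability is at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # a Mersenne prime
print(is_probable_prime(2**67 - 1))  # composite: 193707721 * 761838257287
```

With 40 rounds the error bound is $4^{-40} \approx 10^{-24}$, which is why "probable primes" of this kind are considered perfectly adequate for generating RSA keys.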
If provable primes are desired, it is possible to prove RSA-sized primes (roughly 300-700 digits) prime, but this comes at a cost in time. The quickest methods in use are APR-CL and ECPP. Still, even these become impractical (time-wise) once the input reaches roughly 10k to 50k digits, so there is currently no practical method to prove such large numbers prime. We must either settle for probable primes (which works for any number), or for provable primes that rely on a (partial) factorization of $n-1$, $n+1$, or both.
Apart from the arguments mentioned in the other answers, one more thing should be said. GIMPS is a distributed computing project. At the moment, its aggregate performance is over 1,000 TFLOPS: a huge amount of computing power, comparable to large supercomputers. Besides the Lucas-Lehmer test (LLT), they also perform trial factoring and probable-prime testing, so only a small fraction of candidates is ever subjected to the full LLT. Even so, for the vast majority of the composite candidates they eliminate, the factorization remains unknown. This combination is the main reason they can check such large numbers.
In practice (when generating keys for encryption, for example) we use probabilistic primality tests like the Miller-Rabin test, whose probability of failure falls off exponentially with the running time. This means we can be quite certain whether a number is prime or not after running the test for a reasonable amount of time.
On the other hand, as you mentioned, primes of record-breaking size are usually generated from special families that admit ad hoc primality tests (like the Mersenne primes).