How can Maple software factor $10^{128}+1$ in a matter of seconds?

I recently used Maple to factor $10^{128} + 1$ into primes. One of the factors had 116 digits, and it did this in a matter of seconds. I also used Maple to factor $10^{256} + 1$ into primes, and after over an hour I shut the program down. My question is: how did Maple factor a number of 129 digits so quickly? I have written computer programs to search for prime pairs, and I taught a math course using computers in which I had the students compute $\pi$ to two thousand decimal places. Yet I have no idea how Maple computed this so quickly. Does anyone have a clue?

There are 2 best solutions below

Uh, not sure about the comments. The truth is that neither WolframAlpha, Maple, nor Mathematica knows the factorization of $10^{128}+1$ with absolute certainty. But they are $99.999\ldots9\%$ sure they have the correct factorization.

In general, it is far, far easier to "test" whether a number is probably prime than it is to "know" with absolute certainty that it is prime. Testing is easy: in Python, for example, `pow(A, p-1, p) == 1` will be true whenever $p$ is prime and $A$ is any number relatively prime to $p$; this is Fermat's little theorem. Are there composites that also pass for some $A$ values? Yes, which is why picking a few random $A$'s is not sufficient to guarantee that $p$ is prime. That is why this trick is called a Fermat test.
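As a concrete sketch of that one-liner, here is a minimal Fermat test in Python (the function name and the choice of 20 random bases are my own, not anything a particular CAS uses):

```python
import random

def fermat_test(p, rounds=20):
    """Fermat test: report 'probably prime' if pow(a, p-1, p) == 1
    for several random bases a. Composites can occasionally fool
    it, so True means probable, not certain; False is certain."""
    if p < 4:
        return p in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        if pow(a, p - 1, p) != 1:
            return False  # a is a Fermat witness: p is composite
    return True

print(fermat_test(2**61 - 1))    # a Mersenne prime: True
print(fermat_test(10**128 + 1))  # composite, so False (with overwhelming probability)
```

Note that the cost per round is one modular exponentiation, which is why even very large inputs are cheap to test.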

Most computer algebra systems use something like the Miller–Rabin test, repeated a few hundred times. This test is so computationally cheap that we are limited only by the size of our memory, not the size of the numbers; we can easily run Miller–Rabin on million-digit numbers, for example. A few hundred Miller–Rabin rounds do NOT guarantee a number is prime, but it might as well, with an absurdly small probability of being composite, around $1/4^{100}$ or so. Another example is the Baillie–PSW test, a probabilistic test so effective that we are not actually sure it is only probabilistic: literally no one has found a counterexample yet.
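A textbook Miller–Rabin implementation in Python (the function name, the small-prime pre-screen, and the default round count are my own choices):

```python
import random

def miller_rabin(n, rounds=40):
    """Miller-Rabin probable-prime test.
    Error probability for a composite n is at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # cheap pre-screen for tiny factors
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Unlike the plain Fermat test, this version also catches Carmichael numbers such as $561$, which fool the Fermat test for every coprime base.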

In general, factoring out lots of small primes with trial division is trivial; likewise, testing whether a large number is (probably) prime is also trivial. Hence, it is no surprise that $10^{128}+1$ factors trivially in any available software. It is when two very large primes (100+ digits) are multiplied together that we have a serious problem. Even something like a 30-digit prime times a 60-digit prime is a non-trivial task. In such cases the Miller–Rabin test fails immediately (as it obviously should, since the number is composite), so we can know with certainty that the product is composite, but we have no known fast method of finding its factors.
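To see that asymmetry concretely, here is a minimal sketch of Pollard's rho method (my own toy version, not what any particular CAS ships): it cracks composites with a small prime factor almost instantly, but its expected running time grows like the square root of the smallest prime factor, so against a product of two 100-digit primes it is hopeless.

```python
import math
import random

def pollard_rho(n):
    """Find one nontrivial factor of a composite n.
    Expected cost ~ sqrt(p) where p is n's smallest prime factor:
    fast when a small factor exists, useless for RSA-sized semiprimes."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = 2
        d = 1
        while d == 1:
            x = f(x)          # tortoise: one step
            y = f(f(y))       # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:            # d == n means an unlucky cycle; retry with new c
            return d

print(pollard_rho(101 * 103))  # prints 101 or 103
```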

If your computer freezes, it is because it has reached a large number that it knows is composite but is stuck trying to find its prime factors. This is when you need heavy machinery like the Quadratic Sieve or other algorithms designed for large-scale, supercomputer-class computation.

> I recently used Maple software to factor $10^{128}+1$ into primes. One of the factors had 116 digits. It did this in a matter of seconds.

Just recall how general-purpose factorization algorithms work, i.e. when no special form of the number to be factored is assumed. Actually, it is not a single algorithm that is used, but a sequence of different approaches, where the approaches most likely to find "small" (prime) factors quickly are run first.

As $10^{128}+1$ has only small factors apart from one big prime factor, the algorithms can finish the complete factorization quickly.

For example, an algorithm would apply the following steps:

  1. Use trial division to find small factors. Trial division is easy to implement, it can decrease the size of the problem, and some of the methods below won't work when small factors are present.

  2. Use Pollard's $p-1$, Williams's $p+1$, and Pollard's $\rho$ to find small factors.

  3. Use the Elliptic Curve Method (ECM) to find small factors.

  4. Use the Quadratic Sieve to factor the number.

  5. Use the Number Field Sieve to factor the number.

Any of the steps are optional, and the list of methods is by no means complete. Notice that the earlier methods are special-case algorithms that work better the smaller the (prime) factors are.
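A toy version of step 1 might look like the following (the function name and the bound are my own illustrative choices): strip small factors by trial division, then hand the cofactor to a probable-prime test, and only if that cofactor is composite do the heavier methods run on it.

```python
def trial_division_stage(n, bound=10**6):
    """Stage 1 of a factoring pipeline: strip small prime factors
    of n by trial division up to `bound`. Returns (small_factors,
    cofactor). The cofactor goes to a probable-prime test next;
    only if it is composite are p-1, rho, ECM, or a sieve invoked."""
    factors = []
    d = 2
    while d <= bound and d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    return factors, n

print(trial_division_stage(2**3 * 7 * 101))  # ([2, 2, 2, 7], 101)
```

Note the cofactor returned may itself be prime (as here) or composite; that is exactly the decision point where the pipeline chooses its next method.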

In the case of ECM, you can change the elliptic curve and make multiple attempts if the order of the curve is not smooth enough.
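The role smoothness plays in this family of methods is easiest to see in Pollard's $p-1$ (step 2 above). A minimal sketch, with the bound and the base $2$ chosen arbitrarily by me: it finds a prime factor $p$ quickly precisely when $p-1$ is built from small primes.

```python
import math

def pollard_p_minus_1(n, bound=10**5):
    """Pollard's p-1: compute a = 2**(j!) mod n incrementally.
    If some prime p | n has p-1 composed of primes below the bound,
    then j! eventually becomes a multiple of p-1, so a == 1 (mod p)
    by Fermat's little theorem, and gcd(a - 1, n) reveals p."""
    a = 2
    for j in range(2, bound):
        a = pow(a, j, n)
        d = math.gcd(a - 1, n)
        if 1 < d < n:
            return d
    return None  # no factor found; p-1 of every prime factor is too rough

print(pollard_p_minus_1(17 * 61))  # 17, since 17 - 1 = 2**4 is very smooth
```

ECM generalizes exactly this idea: it replaces the group of order $p-1$ with the group of points on a random elliptic curve, whose order can be re-rolled by changing the curve, which is why retrying with new curves makes sense.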

Now observe that $10^{128}+1$ has only relatively small prime factors (all up to 6 digits) plus one big prime factor of 116 digits, so this number is relatively easy to factor. For example, Alpertron's online ECM calculator finds the factorization with only 1 curve and in 0.2 seconds:

Time elapsed: 0d 0h 0m 0.2s
Timings:
   • Factoring 1 number using ECM: 0d 0h 0m 0.2s 

Maple is not Alpertron's calculator (a JavaScript / WebAssembly implementation of ECM), but any reasonable factorization routine should be able to find such factorizations without invoking the sieve algorithms, which do not benefit from small factors. And of course, an algorithm should not keep running a method like ECM ad infinitum if it has yielded no factors after a "reasonable" running time.

Maple's `ifactor` function takes an optional parameter that selects the factorization method, for example:

   'pollard' - Pollard's rho method;
   'lenstra' - Lenstra's elliptic curve method; 

However, the art of efficient factoring lies to a great degree in having a good heuristic for when to abandon one factorization method and switch to the next. At least the Alpertron page linked above does a good job factoring $10^{256}+1$:

Time elapsed: 0d 0h 0m 0.9s
Timings:
    • Factoring 3 numbers using ECM: 0d 0h 0m 0.7s 

So it is very likely that Maple's sub-algorithms are not used in a way that is anywhere near optimal. FYI, the factorization is:

$$10^{256}+1 = 10753 \cdot 8253953 \cdot 9524994049 \cdot 73171503617 \cdot p_{225},$$ where $p_{225}$ is a prime with 225 decimal digits. This means this number also falls into the category of "large prime times small factors", whereas the hard case is "product of two primes of comparable magnitude".