What would a polynomial time algorithm for prime factorization do?

I'm not a mathematician by trade, so I apologise if this is a dim question. I realise that no such algorithm currently exists, but I'm trying to understand what a polynomial time prime factorization algorithm would look like. My current understanding is that such an algorithm would take less than twice as long to factorise a number twice as large, and would instead scale linearly with the bit length of the input. So, for example, an algorithm that performed a single simple operation on every number less than $xn$ would not fulfil this criterion, even if $x$ is a very small fraction, nor would one which only needed to perform the cube root of $n$ operations. Am I on the right track?
Asked by Bumbble Comm
What you described is a linear time algorithm. That is already rather fast (depending on the problem). In a polynomial time algorithm, the time needed for computation grows polynomially in the size of the input. For example, with running time $\mathcal{O}(x^2)$, we would get at most four times the running time when doubling the input size, nine times when tripling it, and so on.
You are right, a linear time algorithm, i.e. $\mathcal{O}(x)$, for factorization is rather unlikely. But maybe there is an algorithm that takes, say, $\mathcal{O}(x^{100})$. This algorithm wouldn't really be applicable in practice on current computers, but it would be a polynomial time algorithm and thus of great theoretic interest.
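As a rough illustration of these growth rates (a sketch of operation counts only, not tied to any particular factoring algorithm), compare how different complexities react to doubling the input size:

```python
# Hypothetical operation counts for algorithms of different complexities.
# Doubling the input size x multiplies a quadratic count by 4 and a
# degree-100 count by 2^100 -- still polynomial, but enormous.

def ops_linear(x):
    return x

def ops_quadratic(x):
    return x ** 2

def ops_degree_100(x):
    return x ** 100

for x in (8, 16):
    print(x, ops_linear(x), ops_quadratic(x), ops_degree_100(x))
```

The ratios are fixed: doubling $x$ always quadruples the quadratic count and multiplies the degree-100 count by exactly $2^{100}$, regardless of where you start.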
There are several layers of possible run-time.
An exponential-time algorithm would be one which takes $x$ times as long to factorise a number which is twice as large, for some constant $x>1$. You mention $x=2$, but in fact we should expect something much smaller than this - even trial division only multiplies the time by $x=\sqrt 2$, because the maximum number of trials you need to factorise $n$ is about $\sqrt n$ (if $n$ factorises, one of the factors must be at most its square root). And yes, your cube-root example would also be exponential, with $x=\sqrt[3]{2}$. Exponential-time algorithms get very slow very quickly.
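Trial division makes this concrete. The standard sketch below tries divisors up to $\sqrt n$, so its worst-case work grows like $\sqrt n$ - which is $(\sqrt 2)^b$ in terms of the bit length $b$:

```python
def trial_division(n):
    """Factor n by trying divisors up to sqrt(n).

    The number of candidate divisors is about sqrt(n), so doubling n
    multiplies the worst-case work by roughly sqrt(2) -- exponential
    in the bit length of n.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(91))      # [7, 13]
print(trial_division(999983))  # a prime: the worst case, ~1000 trials
```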
A polynomial time algorithm runs in time which is a polynomial function of the number of bits (or equivalently digits), not necessarily linear. Even time $b^{100}$ is polynomial - and will be much faster than $x^b$ for sufficiently large numbers.
Actually, the fastest known factorisation algorithms are in between the two - the actual running time is described by an unusual function which will be much slower than polynomial but much faster than exponential. An example of such a function is $2^{\sqrt{b}}$ (the actual function is rather more complicated than this).
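To see where a function like $2^{\sqrt b}$ sits, one can compare it numerically against a high-degree polynomial and an exponential, using exact big-integer arithmetic (the particular values of $b$ below are just illustrations of where the ordering settles):

```python
def poly(b):      # a high-degree polynomial in the bit length
    return b ** 100

def subexp(b):    # an example sub-exponential function, 2^sqrt(b)
    return 2 ** int(b ** 0.5)

def exp_time(b):  # exponential in the bit length
    return 2 ** b

# For small b the high-degree polynomial still dominates...
print(subexp(100) < poly(100))   # True: 2^10 is tiny next to 100^100

# ...but for large enough b the ordering settles into
# exponential > sub-exponential > polynomial.
b = 10 ** 8
print(exp_time(b) > subexp(b) > poly(b))  # True
```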
The reason this is important is that computer security depends on the fact that it is relatively easy to find two large prime numbers and multiply them together, but (so far as we know) hard for anyone to reverse this process. This has to remain true even if the enemy trying to reverse the process has a much faster computer than you do - or many such computers and far more time to spend - because we want small devices to be able to make transactions which are secure against well-organised attacks. For codes which are easy to make on a small computer to be difficult to break even with huge resources, the breaking algorithms must get hard much faster than the code-making ones: super-polynomial versus polynomial.
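The asymmetry can be demonstrated with two primes (far too small for any real security - they are just the 10,000th and 100,000th primes): multiplying them is a single big-integer operation, while recovering them by naive search takes work proportional to the smaller prime.

```python
# Two known primes, far too small for real cryptographic use.
p, q = 104729, 1299709
n = p * q            # the easy direction: one multiplication

# The hard direction: recover a factor of n by brute-force search.
d = 2
while n % d != 0:
    d += 1

print(d, n // d)     # prints: 104729 1299709
```

With realistically sized primes (hundreds of digits each), the multiplication stays instant while the search becomes utterly infeasible.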
Major cryptographic systems rest their security on the difficulty of factoring large numbers quickly.

The running time of non-polynomial algorithms grows too fast to allow such a computation in reasonable time at current key lengths. (Even with moderate key lengths, the factorization time can exceed the life expectancy of the universe.)

The growth is less explosive for polynomial running times, although high degrees are still problematic.
$$\begin{array}{r|r|r|r}
n & n^2 & n^{10} & 2^n \\
\hline
1 & 1 & 1 & 2 \\
2 & 4 & 1024 & 4 \\
3 & 9 & 59049 & 8 \\
10 & 100 & 10000000000 & 1024 \\
100 & 10000 & 100000000000000000000 & 1267650600228229401496703205376
\end{array}$$
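The rows of the table can be regenerated exactly, since Python's integers are arbitrary-precision (a quick sanity check, not part of the original answer):

```python
# Regenerate the growth table: n, n^2, n^10, 2^n.
for n in (1, 2, 3, 10, 100):
    print(n, n ** 2, n ** 10, 2 ** n)
```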
Some researchers claim that this limit can be broken by the use of so-called "quantum computers". These do not work by improving the algorithmic complexity on classical hardware, but rather by providing an "exponential" amount of computing power.