Does P / NP algorithm classification pre-date modern computers?


On the face of it, H. C. Pocklington referenced the P / NP distinction in the year 1910. In the Wikipedia article on ‘P’ we find the following passage:

“Kozen states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time." Cobham invented the class as a robust way of characterizing efficient algorithms, leading to Cobham's thesis. However, H. C. Pocklington, in a 1910 paper, analyzed two algorithms for solving quadratic congruences, and observed that one took time "proportional to a power of the logarithm of the modulus" and contrasted this with one that took time proportional "to the modulus itself or its square root", thus explicitly drawing a distinction between an algorithm that ran in polynomial time versus one that did not.”
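To make the contrast concrete, here is a minimal sketch (mine, not Pocklington's actual 1910 method) of the two growth rates, for solving the quadratic congruence $x^2 \equiv a \pmod{p}$ under the simplifying assumption that $p$ is an odd prime with $p \equiv 3 \pmod 4$ and $a$ is a quadratic residue:

```python
# Illustration only: two ways to solve x^2 = a (mod p), with p an odd
# prime, p = 3 (mod 4), and a a quadratic residue mod p.  These are
# assumptions of this sketch, not Pocklington's exact setting.

def sqrt_mod_trial(a, p):
    """Exhaustive search: worst-case time proportional to the modulus p."""
    for x in range(p):
        if x * x % p == a % p:
            return x
    return None

def sqrt_mod_fast(a, p):
    """For p = 3 (mod 4), one square root of a is a^((p+1)/4) mod p.
    Fast modular exponentiation takes ~log p multiplications, i.e. time
    polynomial in the number of digits of p."""
    assert p % 4 == 3
    return pow(a, (p + 1) // 4, p)

p = 1000003           # a prime congruent to 3 mod 4
a = 5 * 5 % p         # 25, a quadratic residue by construction
r = sqrt_mod_fast(a, p)
assert r * r % p == a  # r is a square root of a mod p
```

On numbers of this size the gap is already the difference between a calculation feasible by hand and one that is not, which is presumably why the distinction mattered to a 1910 human calculator.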

Is this a typo? Or is it perhaps a case of putting modern terminology into the mouth of a famous forerunner, as we do in the case of Lagrange’s Theorem?


There are 2 best solutions below


While one might reasonably argue that Pocklington considered the complexity class we now call P, certainly the notion of NP came significantly later.

This is not surprising: it is quite plausible to come up with the concept of polynomial-time algorithms (run-time scaling being very important even for human calculators) without having a proper theoretical basis for computer science, whereas that is not really the case for the concept of "solvable in polynomial time by a non-deterministic Turing machine". The equivalent definition of NP in terms of "yes" instances having certificates that can be efficiently checked is perhaps a little more intuitive, but still not something you'd expect to predate computers.
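A toy illustration of the certificate view (my example, not part of the original answer): for the NP problem "is $n$ composite?", a nontrivial divisor is a certificate that can be checked in time polynomial in the number of digits of $n$, whereas the obvious way to *find* one is exhaustive search:

```python
def check_composite_certificate(n, d):
    """Verify a claimed certificate that n is composite: d must be a
    nontrivial divisor.  Two comparisons and one modular reduction --
    polynomial in the bit-length of n, however d was obtained."""
    return 1 < d < n and n % d == 0

def find_certificate(n):
    """Exhaustive search for a certificate: up to ~sqrt(n) trial
    divisions, i.e. exponential in the bit-length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no certificate exists: n is prime (or n < 4)
```

Checking is cheap even when finding is not; that asymmetry is the content of the certificate definition of NP.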


Like Especially Lime, I don't see any evidence that Pocklington or his contemporaries considered the class NP. However, one might still argue that the P vs. NP question predates modern computers, for the following reason: Gödel, in a 1956 letter to von Neumann, wrote (emphasis mine):

I would like to allow myself to write you about a mathematical problem, of which your opinion would very much interest me: One can obviously easily construct a Turing machine, which for every formula $F$ in first order predicate logic and every natural number $n$, allows one to decide if there is a proof of $F$ of length $n$ (length = number of symbols). Let $\psi(F,n)$ be the number of steps the machine requires for this and let $\varphi(n) = \max_F \psi(F,n)$. The question is how fast $\varphi(n)$ grows for an optimal machine. One can show that $\varphi(n)\ge k \cdot n$. **If there really were a machine with $\varphi(n) \sim k \cdot n$ (or even $\sim k \cdot n^2$)**, this would have consequences of the greatest importance. Namely, it would obviously mean that in spite of the undecidability of the Entscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine. After all, one would simply have to choose the natural number $n$ so large that when the machine does not deliver a result, it makes no sense to think more about the problem. Now it seems to me, however, to be completely within the realm of possibility that $\varphi(n)$ grows that slowly. Since it seems that $\varphi(n) \ge k \cdot n$ is the only estimation which one can obtain by a generalization of the proof of the undecidability of the Entscheidungsproblem and after all $\varphi(n) \sim k \cdot n$ (or $\sim k \cdot n^2$) only means that the number of steps as opposed to trial and error can be reduced from $N$ to $\log N$ (or $(\log N)^2$). However, such strong reductions appear in other finite problems, for example in the computation of the quadratic residue symbol using repeated application of the law of reciprocity. 
It would be interesting to know, for instance, the situation concerning the determination of primality of a number and **how strongly in general the number of steps in finite combinatorial problems can be reduced with respect to simple exhaustive search**.
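Gödel's closing example, computing the quadratic residue symbol by repeated application of reciprocity, is precisely such a reduction from $\sim N$ steps to $\sim \log N$ steps. A sketch of the standard binary Jacobi-symbol algorithm (my rendering; nothing like this code appears in the letter):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed by repeatedly
    extracting factors of 2 and applying quadratic reciprocity.
    Runs in O(log a * log n) bit operations, versus the ~n steps of
    testing residues directly."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # second supplement: value of (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # reciprocity: flip (a/n) to (n/a)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # 0 when gcd(a, n) > 1
```

For an odd prime modulus this agrees with the Legendre symbol, so `jacobi(a, p)` decides in $\sim \log p$ steps whether $x^2 \equiv a \pmod p$ is solvable, which is the "strong reduction" Gödel points to.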

Now to the meat. Whether or not this constitutes a pre-modern-computing discovery of P vs. NP is a subjective question; I believe that it does not, although it is a very interesting near-miss, for the following reasons (and I welcome comments/criticism from those who understand the issue better than I do - I'm at best an amateur as both a historian and a complexity theorist):

  • First, looking at the first bolded portion, Gödel is aware of the possibility of looking beyond linear functions, but does not go beyond quadratic; although it's a plausible extrapolation to assume he implicitly referred to all polynomial-time bounds, there isn't solid evidence in the text for this. Similarly, the second bolded portion can (reasonably, in my opinion) be interpreted as contrasting exponential time with polynomial (deterministic) time; but that's not entirely clear, and even less well-justified would be an interpretation of that passage as contrasting P and NP.

  • Second, Gödel's question is phrased as a query about the growth rate of an individual function; although it's tempting to identify that function with a complexity class, that instinct reflects our post-Cook intuition, and I don't see an obvious reason to read that intuition into Gödel's letter.

  • Finally, it's dubious whether 1956 constitutes "pre-modern computers": while our modern world of "everyday computers" of course came much later, by 1956 we already had the following (and these are just the events which are obviously important to me as an outsider - presumably those who actually know something about the subject can add many more):

    • UNIVAC 1101, capable of storing and executing programs in/from memory

    • Electric, mass-produced computers from IBM

    • Some early computer translation (from/to Russian)

    • Serious improvements in memory (e.g. magnetic core RAM)

    • Some computers with real-time graphics

So my reaction to Gödel's letter above is that while he'd discovered the philosophical motivation for P vs. NP, and done so very early in the modern history of computers, he hadn't quite predated modern computing and, more importantly, had articulated the problem too imprecisely to count as really hitting upon it. But of course, one's mileage may vary - lots of computery stuff has happened since 1956(!), and I don't think it's too ridiculous to suppose Gödel saw quite far past his immediate technical problem to the broader, more fundamental problem waiting in the wings.