What are some examples of mathematical facts that had been open problems for a significant amount of time and were thought to be hard or unsolvable by contemporary methods, but were then unexpectedly solved thanks to some out-of-the-box flash of genius, where the proof is actually short (say, one page or so) and uses only elementary mathematics?
Past open problems with sudden and easy-to-understand solutions
The AKS (Agrawal–Kayal–Saxena) algorithm, which shows that we can decide whether a number is prime in polynomial time. It was announced in 2002 and is said to be accessible to non-specialists because of the modest background needed to understand it. More information can be found on Wikipedia and in the original paper.
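The elementary fact at the heart of AKS is that $n \ge 2$ is prime if and only if $(X+1)^n \equiv X^n + 1 \pmod{n}$ as polynomials, i.e. if and only if $n$ divides every binomial coefficient $\binom{n}{i}$ with $0 < i < n$. The sketch below is only a naive, exponential-time check of this identity (the actual algorithm works modulo $(X^r - 1,\, n)$ to reach polynomial time); the function name is mine, for illustration:

```python
from math import comb

def prime_via_polynomial_identity(n: int) -> bool:
    """Naive check of the identity behind AKS: n >= 2 is prime iff
    (X + 1)^n == X^n + 1 (mod n), i.e. iff n divides C(n, i) for 0 < i < n.
    Exponential time -- for illustration only, not the AKS algorithm."""
    if n < 2:
        return False
    return all(comb(n, i) % n == 0 for i in range(1, n))

print([k for k in range(2, 40) if prime_via_polynomial_identity(k)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
```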
Euler stated the theorem but never managed to prove it; it took Gauss many years to find a proof, and by now over 200 different proofs are known, some of which can be explained in an hour-long lecture.
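Assuming the theorem meant here is the law of quadratic reciprocity (the details about Euler, Gauss and the 200-plus known proofs all fit it), its statement for distinct odd primes $p$ and $q$ is
$$\left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}},$$
where $\left(\frac{a}{p}\right)$ is the Legendre symbol, equal to $+1$ if $a$ is a nonzero square modulo $p$ and $-1$ otherwise.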
Theorem: transcendental numbers exist and there are (uncountably) infinitely many of them.
The existence of transcendental numbers had been conjectured for over 100 years before Liouville constructed one in 1844. Other numbers such as $e$ were shown to be transcendental one by one. Cantor was able to prove their existence with ease:
Proof: the algebraic numbers are countable and the real numbers are uncountable.
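Spelled out slightly more (a standard expansion of the same argument): there are countably many nonzero integer polynomials, and each has only finitely many roots, so
$$\#\{\alpha \in \mathbb{R} : \alpha \text{ algebraic}\} \;\le\; \aleph_0 \cdot \aleph_0 \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; \#\mathbb{R},$$
hence not only do transcendental numbers exist, but all but countably many real numbers are transcendental.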
Some people thought for hundreds of years that the Euclidean parallel postulate could be proven from the other four axioms of Euclidean geometry. Giovanni Saccheri even wrote a book about it – Euclides ab omni naevo vindicatus (Euclid Freed of Every Flaw).
However, with the discovery of hyperbolic geometry in 1826 by Nikolai Lobachevsky, the conjecture was suddenly disproven, and hyperbolic geometry is not very hard to understand.
The problem was one of the most important problems in geometry at the time.
The integral of $\sec x$ stumped mathematicians in the mid-seventeenth century for quite a while until, in a flash of insight, Isaac Barrow showed that the following can be done:
$$\int \sec x \,\mathrm{d}x= \int \frac{1}{\cos x} \, \mathrm{d}x=\int \frac{ \cos x}{\cos^2 x} \, \mathrm{d}x=\int \frac{ \cos x}{1-\sin^2 x} \, \mathrm{d}x.$$
Using $u$-substitution and letting $u=\sin x$, the integral transforms to
$$\int \frac{1}{1-u^2} \, \mathrm{d}u,$$
which is easily evaluated by partial fractions.
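For completeness, here is the partial-fraction step and the resulting closed form (a standard computation, not spelled out in the answer above):
$$\int \frac{\mathrm{d}u}{1-u^2} = \frac{1}{2}\int\left(\frac{1}{1-u}+\frac{1}{1+u}\right)\mathrm{d}u = \frac{1}{2}\ln\left|\frac{1+u}{1-u}\right| + C = \frac{1}{2}\ln\left|\frac{1+\sin x}{1-\sin x}\right| + C,$$
which is equivalent to the more familiar $\ln\left|\sec x + \tan x\right| + C$.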
The Quillen–Suslin theorem, which states that every finitely generated projective module over a polynomial ring $k[X_1, \dots, X_n]$ (with $k$ a field) is free, was originally an open conjecture of Serre. Quillen won a Fields Medal in part for his work on this problem. Later on, Vaserstein gave a proof short and simple enough to fit into an appendix of Lang's "Algebra," a standard graduate-level text.
Perhaps this example qualifies.
Victor Klee posed the question of how many vertex "guards" are sometimes necessary and always sufficient to see the entire interior of a simple polygon in the plane. This problem is now known as the art gallery problem. V. Chvátal found a nice proof relatively soon after the problem was publicized, but a surprisingly simple and appealing proof was found by Steve Fisk a few years later. Klee's original problem has been generalized in many ways and has led to a huge literature, including new methods that have found use on unrelated problems, as well as related questions that are still unresolved.
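For reference, a sketch of Fisk's argument (the bound itself, that $\lfloor n/3 \rfloor$ guards always suffice and are sometimes necessary for an $n$-vertex polygon, is Chvátal's theorem): triangulate the polygon by diagonals, $3$-color the vertices of the triangulation so that each triangle gets all three colors (possible by induction, cutting off an "ear" triangle), and place guards at the vertices of the least-used color. Every triangle contains a guard, so the guards see the whole polygon, and the chosen color class has at most
$$\left\lfloor \frac{n}{3} \right\rfloor$$
vertices.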
Algebraic solution of the cubic equation was a major open problem for millennia, counting the time from the Greeks, or from the much earlier solutions of quadratic equations.
The formula can be derived in a few lines, using modern notation.
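Here is a sketch of that derivation for the depressed cubic $t^3 + pt + q = 0$ (the general cubic reduces to this form by the substitution $x = t - a/3$ in $x^3 + ax^2 + bx + c = 0$). Writing $t = u + v$,
$$t^3 + pt + q = u^3 + v^3 + q + (3uv + p)(u + v),$$
so imposing $3uv = -p$ leaves $u^3 + v^3 = -q$ together with $u^3 v^3 = -p^3/27$. Thus $u^3$ and $v^3$ are the roots of the quadratic
$$z^2 + qz - \frac{p^3}{27} = 0,$$
which gives Cardano's formula
$$t = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}} + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}.$$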
This is the largest ratio of (age of the problem)/(length of solution) I can think of from mathematical history.
Another large ratio is Euler's sum of powers conjecture where a complete solution to a problem raised in 1769 can be written in $22$ characters: $$27^5 + 84^5 + 110^5 + 133^5 = 144^5$$
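The identity, a counterexample to Euler's conjecture found by Lander and Parkin in 1966, is easy to verify directly:

```python
# Direct arithmetic check of the counterexample to Euler's sum of powers conjecture.
print(27**5 + 84**5 + 110**5 + 133**5 == 144**5)  # True
```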
This is an incomplete answer. There is a problem linked to Erdős that goes something like this: a set of points in the plane has the property that every line passing through two distinct points of the set also passes through a third distinct point of the set; prove that such a set (if not contained in a single line) must be infinite. (Something like that...) In any case, there was originally a really long solution to this problem, and soon afterwards a four-line proof using the extremal principle: roughly, "take the point-line pair at the smallest positive distance" and derive a contradiction, which works because in any finite set such a minimum would exist.
Our complex analysis professor told us that a huge amount of literature had been written studying bounded analytic functions. Then Liouville's theorem (https://en.wikipedia.org/wiki/Liouville%27s_theorem_(complex_analysis)) came along like a hammer: a super easy proof of what was, according to my professor, a very unexpected result at the time.
I haven't found a verification of this anecdote, but it is a fun story in any case.
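For context (this is standard material, not part of the anecdote): Liouville's theorem states that every bounded entire function is constant, and the easy proof the professor had in mind is presumably the one-liner from the Cauchy estimates: if $|f| \le M$ everywhere, then for every $z_0$ and every radius $R$
$$|f'(z_0)| \;\le\; \frac{M}{R} \;\xrightarrow{\;R \to \infty\;}\; 0,$$
so $f' \equiv 0$ and $f$ is constant.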
The proof of Apéry's theorem is elementary in the sense of requiring only very old techniques (see van der Poorten's article "A Proof that Euler Missed").
The impossibility of the duplication of the cube and of the trisection of an arbitrary angle are easy consequences of elementary field theory.
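In outline (standard field-theory reasoning, added for completeness): every constructible number lies in a tower of quadratic extensions of $\mathbb{Q}$, so its degree over $\mathbb{Q}$ is a power of $2$. But
$$[\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}] = 3 \qquad\text{and}\qquad [\mathbb{Q}(\cos 20^\circ) : \mathbb{Q}] = 3,$$
the latter because $\cos 3\theta = 4\cos^3\theta - 3\cos\theta$ shows that $x = \cos 20^\circ$ satisfies $8x^3 - 6x - 1 = 0$, a cubic with no rational roots and hence irreducible over $\mathbb{Q}$. So $\sqrt[3]{2}$ is not constructible (no duplication of the cube), and the constructible angle $60^\circ$ cannot be trisected.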
For a long while the Stanley-Wilf conjecture was one of the most prominent open problems in enumerative combinatorics, until it was resolved with an elementary one-page proof by Marcus and Tardos.
A permutation $\sigma$ on $\{1, \ldots, n\}$ is said to contain a permutation $\pi$ on $\{1, \ldots, k\}$ if there exist integers $1 \le i_1< \ldots< i_k \leq n$ such that $\sigma(i_a) <\sigma(i_b)$ if and only if $\pi(a) < \pi(b)$. If $\sigma$ does not contain $\pi$, we say that $\sigma$ avoids $\pi$. Let $S_n(\pi)$ be the number of permutations on $\{1, \ldots, n\}$ that avoid $\pi$. The Stanley-Wilf conjecture is that for all permutations $\pi$ there exists a constant $c_\pi$ such that, for all $n$, $S_n(\pi) \le c_\pi^n$.
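As a concrete illustration of the definition (a brute-force sketch of my own, unrelated to the Marcus–Tardos proof), the following counts $S_n(\pi)$ directly for small $n$; for $\pi = 123$ the counts are the Catalan numbers $1, 2, 5, 14, 42, \ldots$, which grow roughly like $4^n$, in line with the conjectured exponential bound:

```python
from itertools import permutations, combinations

def contains(sigma, pi):
    """True if the permutation sigma contains the pattern pi."""
    k = len(pi)
    for idxs in combinations(range(len(sigma)), k):
        sub = [sigma[i] for i in idxs]
        # sub matches pi when its entries appear in the same relative order
        if all((sub[a] < sub[b]) == (pi[a] < pi[b])
               for a in range(k) for b in range(k)):
            return True
    return False

def S(n, pi):
    """Number of permutations of {1, ..., n} that avoid the pattern pi."""
    return sum(1 for sigma in permutations(range(1, n + 1))
               if not contains(sigma, pi))

print([S(n, (1, 2, 3)) for n in range(1, 7)])  # [1, 2, 5, 14, 42, 132]
```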
Marcus and Tardos proved the related Füredi–Hajnal conjecture with a simple but very clever pigeonhole argument. The Füredi–Hajnal conjecture was already known to imply Stanley–Wilf: the one-paragraph argument, due to Klazar, can also be found in the Marcus–Tardos paper.
From "Polygonal Rooms Not Illuminable from Every Point" by George W. Tokarsky, The American Mathematical Monthly Vol. 102, No. 10 (Dec., 1995), pp. 867-879:
Imagine two people in a dark room with many turns and cul-de-sacs. Assuming that the walls, floors and ceilings are constructed of reflective material, can one person strike a match and be seen by the other after repeated reflections, no matter where the two are located?
This problem has been attributed to Ernst Straus in the early 1950's, and has remained open for over forty years. It was first published by Victor Klee in 1969. [...]
In this article, we will settle the above problem in the negative. We will as well give elementary techniques for constructing rooms, both in the plane and in three-space, which are not illuminable from every point.
Hilbert's basis theorem became a sweeping, general solution to a problem that many mathematicians had struggled with for a long time:
If $R$ is a Noetherian ring (a ring in which every ideal is finitely generated), then so is $R[X]$, the ring of polynomials in $X$ with coefficients in $R$.
Here is the first information-theoretic incompleteness theorem. Consider an N-bit formal axiomatic system. There is a computer program of size N which does not halt, but one cannot prove this within the formal axiomatic system. On the other hand, N bits of axioms can permit one to deduce precisely which programs of size less than N halt and which ones do not. Here are two different N-bit axioms which do this. If God tells one how many different programs of size less than N halt, this can be expressed as an N-bit base-two numeral, and from it one could eventually deduce which of these programs halt and which do not. An alternative divine revelation would be knowing that program of size less than N which takes longest to halt. (In the current context, programs have all input contained within them.)
The proof of this closely resembles G. G. Berry's paradox of "the first natural number which cannot be named in less than a billion words," published by Russell at the turn of the century (Russell, 1967). The version of Berry's paradox that will do the trick is "that object having the shortest proof that its algorithmic information content is greater than a billion bits." More precisely, "that object having the shortest proof within the following formal axiomatic system that its information content is greater than the information content of the formal axiomatic system: ...," where the dots are to be filled in with a complete description of the formal axiomatic system in question.
- A universal approach to, and proof scheme for, self-referential paradoxes, from Cantor to Gödel and Turing, through the work of Lawvere, Eilenberg and others on Cartesian closed categories and category theory.
Following F. William Lawvere, we show that many self-referential paradoxes, incompleteness theorems and fixed point theorems fall out of the same simple scheme. We demonstrate these similarities by showing how this simple scheme encompasses the semantic paradoxes, and how they arise as diagonal arguments and fixed point theorems in logic, computability theory, complexity theory and formal language theory.
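The core of the scheme, stated informally in the set-theoretic special case rather than in full categorical generality, is Lawvere's fixed point theorem: if $e : A \to Y^A$ is surjective, then every map $t : Y \to Y$ has a fixed point. The diagonal proof takes three lines: define $f(a) = t\bigl(e(a)(a)\bigr)$; by surjectivity $f = e(a_0)$ for some $a_0$; then
$$e(a_0)(a_0) = f(a_0) = t\bigl(e(a_0)(a_0)\bigr),$$
so $e(a_0)(a_0)$ is a fixed point of $t$. Cantor's theorem is the special case $Y = \{0,1\}$ with $t$ the negation map, which has no fixed point.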
Theorem 1 (Generalized NFL theorem). Let $H$ be an arbitrary (randomized or deterministic) search heuristic for functions $f \in F \subseteq F_{A,B}$, where $F$ is closed under permutations. Let $r(H)$ be the average (under the uniform distribution on $F$) of the expected runtimes of $H$ on $F$. Then $r(H)$ is a value independent of $H$, i.e., $r(H)$ is the same for all $H$.
[...] The generalized NFL theorem is by no means surprising. If a class of functions does not change by any permutation on the input space, there is no structure which can be used for search. Hence, all search strategies show the same behavior.
(There is another reference for a simple proof of the NFL theorem that also displays its elementary character, but I cannot seem to find it at this point.)
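To make the "no structure to exploit" point concrete, here is a small self-contained brute-force check (an illustrative sketch of my own, not taken from the paper): averaging over all functions from a 4-point domain to a 3-point codomain, a class trivially closed under permutations of the inputs, three different non-revisiting deterministic strategies, including one that adapts to the values it has seen, need exactly the same average number of evaluations to first query a global maximum.

```python
from itertools import product

def run(strategy, f, X):
    """Query unvisited points chosen by `strategy` until a global maximum
    of f is evaluated; return the number of queries used."""
    best = max(f[x] for x in X)
    history, visited = [], set()
    while True:
        x = strategy(history, [p for p in X if p not in visited])
        visited.add(x)
        history.append((x, f[x]))
        if f[x] == best:
            return len(history)

# Three deterministic, non-revisiting search strategies.
def left_to_right(history, unvisited):
    return unvisited[0]

def right_to_left(history, unvisited):
    return unvisited[-1]

def value_adaptive(history, unvisited):
    # After seeing a positive value, look at the nearest unvisited point;
    # otherwise jump to the farthest one.
    if not history:
        return unvisited[len(unvisited) // 2]
    last_x, last_y = history[-1]
    dist = lambda x: abs(x - last_x)
    return min(unvisited, key=dist) if last_y > 0 else max(unvisited, key=dist)

X, Y = range(4), range(3)  # tiny input and output spaces
all_functions = [dict(zip(X, values)) for values in product(Y, repeat=len(X))]

for name, strat in [("left-to-right", left_to_right),
                    ("right-to-left", right_to_left),
                    ("value-adaptive", value_adaptive)]:
    avg = sum(run(strat, f, X) for f in all_functions) / len(all_functions)
    print(f"{name:15s} average evaluations to reach a maximum: {avg:.3f}")
# All three averages coincide, as the NFL theorem predicts.
```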
It is not completely elementary, but Abel's proof of the Abel–Ruffini theorem is quite short (6 pages) and can, with a bit of introduction, be understood by someone without a degree in mathematics. The Abel–Ruffini theorem states that there is no general solution in radicals to polynomial equations of degree 5 or higher.
The Abel–Ruffini theorem had been open for over two hundred years and was one of the central problems in the mathematics of its time, akin to the Riemann Hypothesis now. For degree 2, a formula had been known to the Babylonians since around 2000 BC. For degrees 3 and 4, formulas had been discovered nearly three centuries earlier. The search for a formula for degree 5 had long been in progress.