What seemingly innocuous results in mathematics require advanced proofs?

I'm interested in finding a collection of basic results in mathematics that require rather advanced methods of proof. In this list we're not interested in basic results whose tedious elementary proofs can merely be shortened by more advanced methods, but in basic results that genuinely require advanced methods.

I appreciate that the question asked here is very similar to another question asked on this website: It looks straightforward, but actually it isn't. Thank you for pointing this out. However, in my opinion, it does differ significantly (this is debatable). The main goal of this discussion is to find examples that are easily digestible to non-advanced students of mathematics and related disciplines. This, I hope, will spur discussion of the dichotomy between what is considered trivial from a mathematical perspective and what may be considered intuitive. Quite often, less experienced students tend to gloss over fairly intuitive results under the assumption that the proof follows easily. I hope this will be a good resource for showing that this is not the case.

In particular, I was hoping to find a list of problems that seem intuitive on inspection but are out of the reach of elementary methods. The statement of the theorem should be understandable by junior undergraduate students, even if the proof is rather inaccessible. Please also mention why elementary methods fail to shed any light on the problem.

Many thanks

There are 18 answers below.

Answer (score 8)

Jordan Curve Theorem: Any continuous simple closed curve in the plane separates the plane into two disjoint regions, the inside and the outside. For a long time this result was considered so obvious that no one bothered to state it as a theorem, let alone prove it.

https://people.math.osu.edu/fiedorowicz.1/math655/Jordan.html


Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm which, for any given Diophantine equation (a polynomial equation with integer coefficients and a finite number of unknowns) can decide whether the equation has a solution with all unknowns taking integer values.

https://en.wikipedia.org/wiki/Hilbert%27s_tenth_problem

In this case it took advanced methods to show that there is no such algorithm.


Around 1637, Fermat wrote [in Latin] in the margin of a book that the [more] general equation $a^n + b^n = c^n$ had no solutions in positive integers, if $n$ is an integer greater than $2$. Although he claimed to have a general proof of his conjecture, Fermat left no details of his proof, and no proof by him has ever been found. His claim was discovered some 30 years later, after his death. This claim, which came to be known as Fermat's Last Theorem, stood unsolved in mathematics for the following three and a half centuries.

https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem


The Kepler conjecture, named after the 17th-century mathematician and astronomer Johannes Kepler, is a mathematical theorem about sphere packing in three-dimensional Euclidean space. It says that no arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing (face-centered cubic) and hexagonal close packing arrangements. The density of these arrangements is around 74.05%.

https://en.wikipedia.org/wiki/Kepler_conjecture


$\pi$ is transcendental

This is easy to state, was suspected well before it was proved, and isn't easy to prove.

We think but don't know that $\pi$ is normal.

How hard is the proof of $\pi$ or $e$ being transcendental?

https://mathoverflow.net/questions/34055/transcendence-of-pi

Answer (score 17)

The four color theorem is easy to state and intuitively obvious even to most young children. Its proof requires checking 1,936 different cases, which makes it difficult, and it also required the discharging method, a proof technique created for coloring-type problems. Another perk is that it's easy to show three colors are insufficient, and the five color theorem has a relatively easy proof by contradiction accessible to most high school graduates, so it's just the four-color case that's difficult.

Answer (score 2)

The Kepler Conjecture: No arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing and hexagonal close packing arrangements.

Consider a plane with a compact arrangement of spheres. The conjecture roughly says that to form the next layer of the packing, for any three neighbouring spheres, place a sphere in the hollow between them. This gives the densest possible packing.

The conjecture was stated by Johannes Kepler in 1611. A formal, computer-verified proof (the Flyspeck project, by Hales and collaborators) was finally published on May 29, 2017. It is a proof by exhaustion that requires solving around 100,000 linear programming problems.

Answer (score 6)

One all-time favorite is the following principle:

If set $A$ has larger cardinality than set $B$ then set $2^A$ has larger cardinality than set $2^B$.

Though seemingly innocuous, this is actually independent of ZFC.

Hamkins refers to this as the Powerset Size Axiom (PSA) in

Hamkins, Joel David The set-theoretic multiverse. Rev. Symb. Log. 5 (2012), no. 3, 416–449, arxiv: 1108.4223.

Answer (score 4)

Fermat's Last Theorem states that there are no positive integer solutions $a,b,c$ to $a^n+b^n=c^n$ for $n>2$.

This one irked mathematicians for 350 years until the theory of elliptic curves was developed to the point of producing a solution. Adding to the frustration was the fact that Fermat claimed to have an elegant proof of the result, which was lost to history because he couldn't fit it in the margins of the book wherein he collected his number theoretic observations.


Topology also furnishes a whole bunch of examples:

The Brouwer Fixed Point Theorem states that any continuous mapping $f$ from a disk to itself has a fixed point, i.e. a point $x$ such $f(x)=x$. It's not too hard to convince yourself this should be true by playing around with a rubber disk, but most proofs of this theorem are non-constructive and require algebraic topology. (A notable exception to this is a constructive proof using Sperner's Lemma.)

The Borsuk Ulam Theorem states (in two dimensions) that for any two real continuous functions $f$ and $g$ on a sphere, there are two antipodal points $x$ and $-x$ of the sphere where $f(x)=f(-x)$ and $g(x)=g(-x)$. So, for example, there are always two antipodal points on the Earth which have equal temperatures and equal barometric pressures. Proving that there is such a pair of points for which just $f(x)=f(-x)$ is relatively easy (relying only on the intermediate value theorem), but getting the same pair of points to work for both $f$ and $g$ again relies on algebraic topology (or combinatorial equivalents).

Answer (score 7)

Monsky's theorem: it is not possible to partition a square into an odd number of triangles having equal areas.

Despite the statement of the theorem being deceptively elementary, its proof requires rather involved combinatorial and algebraic arguments (the standard proof uses a 2-adic valuation extended to the reals, together with Sperner-style combinatorics).

Answer (score 2)

The Kissing Number Problem asks, given a solid sphere, how many solid spheres of the same size can be placed tangent to it. For example, how many billiard balls can you place touching the surface of another billiard ball before you run out of space.

Trying it yourself, you'll pretty quickly come to the conclusion that 12 is the maximum. There just doesn't seem to be a way to fit a 13th in there, even though there is quite a bit of extra space. And mathematically, it's easy to construct a solution for 12 (inscribe an icosahedron and place spheres at its vertices) and to prove that 14 is impossible (triangulate the tangent points; there's not enough surface area on the sphere for them).

But a full proof that 13 is impossible wasn't given until 1953, even though the problem dated back to a 1694 discussion between Isaac Newton and David Gregory.
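The easy half of this (that 12 fits) can even be checked numerically. A minimal sketch, with all names my own: take the 12 vertices of a regular icosahedron, rescale them to distance 2 from the origin (the centers of unit spheres tangent to a central unit sphere), and verify that every pairwise distance exceeds 2, so the 12 spheres don't overlap:

```python
import itertools
import math

phi = (1 + 5 ** 0.5) / 2  # golden ratio

# The 12 vertices of a regular icosahedron: cyclic permutations of (0, ±1, ±phi).
verts = []
for a, b in itertools.product((1, -1), repeat=2):
    verts += [(0, a, b * phi), (a, b * phi, 0), (b * phi, 0, a)]

# Rescale so each vertex lies at distance 2 from the origin: these become the
# centers of unit spheres tangent to a central unit sphere at the origin.
r = math.sqrt(1 + phi * phi)  # common norm of the raw vertices
centers = [tuple(2 * c / r for c in v) for v in verts]

# The 12 outer spheres don't overlap iff all pairwise center distances are >= 2.
min_dist = min(math.dist(u, v) for u, v in itertools.combinations(centers, 2))
print(len(centers), round(min_dist, 4))  # 12 spheres, minimum distance ≈ 2.1029 > 2
```

The slack (2.1029 versus 2) is exactly the "quite a bit of extra space" mentioned above, which is why ruling out a 13th sphere is so delicate.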

Answer (score 6)

To my taste, the extreme value theorem is intuitively obvious; if $f: [a,b] \rightarrow \mathbb{R}$ is continuous then there exist $c,d \in [a,b]$ for which $f(c) \leq f(x) \leq f(d)$ for each $x \in [a,b]$. In words, if we consider a continuous function on $[a,b]$ then it reaches both a minimum and maximum value. Or, in precalculus terms:

A graph drawn without lifting the pen off the paper over a finite interval attains a largest and smallest value at least once in the graph.

The proof of this is not usually given in the introductory course as it requires somewhat technical attention to the completeness of the real number system. From a topological perspective, this proof requires the compactness of closed intervals.

Answer (score 2)

The Prime Number Theorem, so well known it's simply referred to as "PNT." There are lots of primes among the small numbers, but fewer and fewer as you move up. Another way of stating it: the larger a number is, the lower its chance of being prime, because larger numbers have more potential factors rendering them composite.

Specifically, the numbers of primes between 1 & 1000, 1000 & 2000, 2000 & 3000, ..., 9000 & 10000 are, respectively, 168, 135, 127, 120, 119, 114, 117, 107, 110, 112. The count gets smaller, but not smoothly; it occasionally jumps up a little. This bracketed count reflects the prime density. The PNT states that the density decreases forever, never peters out, and the decrease becomes increasingly smooth. If $\pi(n)$ is the number of primes less than $n$, then $\pi(n) \sim {n \over \log n}$, where $\log$ is the natural logarithm; the approximation $n / \log n$ becomes relatively more accurate as $n$ increases. If the density were constant, e.g. if in any range about one tenth of the integers were prime, we'd expect $\pi(n) = n/10$. But the denominator isn't $10$, or any other constant, but rather a growing value, $\log n$.

The PNT was first proved in 1896 using very 'complex' methods (pun intended). Not until 1948–49 was there an 'elementary' proof, by Atle Selberg and Paul Erdős. See "Prime number theorem" in, e.g., Wikipedia.
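The counts above and the quality of the $n/\log n$ approximation are easy to check with a short script. This is just a sanity check (function name my own), not part of any proof:

```python
import math

def prime_count(n):
    """pi(n): the number of primes <= n, via a simple Sieve of Eratosthenes."""
    if n < 2:
        return 0
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p starting at p*p as composite.
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return sum(is_prime)

# The ratio pi(n) / (n / log n) drifts slowly toward 1 as n grows, as PNT predicts.
for n in (10 ** 3, 10 ** 4, 10 ** 5):
    ratio = prime_count(n) / (n / math.log(n))
    print(n, prime_count(n), round(ratio, 3))
```

Running it shows the ratio shrinking toward 1 (slowly; the convergence of $\pi(n)\log n/n$ is notoriously leisurely), which is the PNT in miniature.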

Answer (score 4)

The Hairy Ball Theorem: you cannot comb a hairy ball flat without creating a cowlick; or: a continuous tangent vector field on the sphere must vanish somewhere.


(Baby's head photo from Surrey Physics Blog.)
One can comb a hairy torus, however.

John Milnor, "Analytic proofs of the 'hairy ball theorem'", American Math. Monthly, 85 (1978), 521-524.

Answer (score 3)

Not a "result" as such, and tongue-in-cheek, but I guess many of us share the feeling that "the axiom of choice is obviously true, while Zorn's lemma is obviously false...". Yet they are equivalent. :-)

Answer (score 11)

Dirichlet's theorem, that if $a,b$ are coprime natural numbers then $\{an+b: n\in \mathbb N\}$ contains infinitely many primes, was one of the first (if not the first) important results in number theory to use analysis in its proof.

The irrationality of $\pi$ was first shown by Lambert in the 1760s. Niven's remarkably short 20th-century proof is a gem. Both use calculus.

It is known that $e$ and $\pi$ are not Liouville numbers. Perhaps someone who knows something about the proofs could comment on this.

In 2013 Yitang Zhang proved that, if $p_n$ is the $n$th prime, then $\liminf_{n\to \infty}(p_{n+1}-p_n)<7\cdot 10^7.$ Further work (the Polymath project, building on Maynard's method) has reduced the bound from $7\cdot 10^7$ to $246.$ I am not familiar with the proof, but I doubt that it's elementary. No finite upper bound for $\liminf_{n\to \infty}(p_{n+1}-p_n)$ was previously known to exist. Ideally we would like to prove that the $\liminf$ is exactly $2$ (the Twin Prime Conjecture).

Theorem (Liouville). A bounded entire function $f:\mathbb C\to \mathbb C$ is constant. It is proved by considering the formula $f'(z)=(2\pi i)^{-1}\int_{|y-z|=r} f(y)(y-z)^{-2}dy$ as $r\to \infty.$ This theorem can be proved by elementary means by first showing that if $g$ is an entire function then $|g|$ cannot have a non-zero local minimum, using the power series for $g,$ but that requires showing that $g$ is indeed equal to its power series expansion about any center, which (as far as I know) needs Cauchy's integral formula.

The Banach-Tarski "Paradox" may qualify. It requires the use of non-Lebesgue-measurable sets.

One of Hilbert's famous problems (the 17th) was: for $n\geq 2,$ if the polynomial $p:\mathbb R^n\to \mathbb R$ is never negative, is $p$ equal to a finite sum of squares of rational functions from $\mathbb R^n$ to $\mathbb R$? For $n=1$ it is elementary (and simple) that $p$ is a finite sum of squares of polynomials, and there are some simple examples with $n=2$ and $n=3$ where $p$ is not a finite sum of squares of polynomials. Hilbert's problem turns out to be (essentially) the same as showing that the theory of real-closed fields is model complete.

Many conjectures were found to be independent of $ZFC$ by use of forcing (including iterated forcing) and Gödel's constructible class $L,$ or were found to be equivalent to, or equiconsistent with, the existence of certain "large cardinals". Examples:

  1. The Suslin (Souslin) conjecture: can a c.c.c. linearly ordered topological space be non-separable? (Originally, as it is easily seen that an infinite, connected, order-dense, separable linear order without end-points is order-isomorphic to $\mathbb R,$ it was asked whether separable could be weakened to c.c.c.) Ronald Jensen showed that $V=L$ implies "Yes", and iterated forcing (Solovay and Tennenbaum) shows that "No" is also consistent with $ZFC.$

  2. Can Lebesgue measure be extended to a countably additive measure whose domain is all sets of reals? This was shown (Solovay) to be equiconsistent with the existence of a measurable cardinal.

Finally, I don't know whether there is a "one-dimensional" proof that the value of $J=\int_{-\infty}^{\infty}e^{-x^2}dx$ is $J=\sqrt {\pi}\;$. We have $J^2=$ $\int_{-\infty}^{\infty} e^{-x^2}dx \cdot \int_{-\infty}^{\infty}e^{-y^2}dy=$ $\int_{(x,y)\in \mathbb R^2}e^{-x^2-y^2}\,dx\,dy.$ Now switch to polar coordinates: $J^2=\int_0^{2\pi}\int_0^{\infty}e^{-r^2}\,r\,dr\,d\theta=2\pi\cdot\tfrac{1}{2}=\pi.$
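As a purely numerical sanity check (not a proof), the value $\sqrt\pi$ can be confirmed with a one-dimensional midpoint Riemann sum; the truncation to $[-10,10]$ is my own choice (the tail mass beyond that is below $e^{-100}$, utterly negligible):

```python
import math

# Midpoint Riemann sum for J = ∫ e^{-x^2} dx, truncated to [-10, 10].
N = 200_000
a, b = -10.0, 10.0
h = (b - a) / N
J = h * sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(N))
print(J, math.sqrt(math.pi))  # the two values agree to many decimal places
```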

Answer (score 2)

The Goldbach Conjecture is still unproven and therefore could, in theory, turn out to be false: "Every even integer greater than 2 is the sum of two primes." If it is someday proven, it will be a good example.
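Part of why the conjecture feels so innocuous is that it is trivial to verify by brute force for small even numbers. A minimal sketch (helper names my own; the conjecture has in fact been machine-checked far beyond this, to around $4\cdot 10^{18}$):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return some prime pair (p, n - p) for even n > 2, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# No counterexample among the even numbers up to 10,000.
assert all(goldbach_pair(n) is not None for n in range(4, 10_001, 2))
print(goldbach_pair(100))  # (3, 97)
```

Of course, no amount of such checking amounts to a proof, which is exactly the point of the example.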

Answer (score 0)

Here are some simple results I believe you want to include in your list:

We cannot square the circle, double the cube, or trisect an arbitrary angle with straightedge and compass.

These results can be proved with field theory.

$e$ and $\pi$ are not rational numbers.

Every vector space has a basis.

This requires Zorn's Lemma (equivalently, the axiom of choice) to be proved.

Answer (score 4)

Perhaps the parallel postulate?

In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point

It was only through trying to prove this "obvious" theorem that we discovered non-Euclidean geometries.

Answer (score 1)

In order to complete the Fundamental Theorem of Calculus, we need to prove

(1). If $f:\mathbb R\to \mathbb R$ is differentiable and $f'=0$ everywhere then $f$ is constant.

For when we have reached the result that if $f'$ is continuous then $\int_0^xf'(y)dy$ is an antiderivative of $f',$ we cannot conclude that $f(x)-f(0)=\int_0^xf'(y)dy$ without (1). Although (1) seems obvious, proving it rigorously without circularity (i.e. without applying the whole Fundamental Theorem) is surprisingly technical: we can prove and apply the MVT or Rolle's Theorem (which are usually presented later in a calculus course), or we can apply other techniques that a beginner might find hard to fathom.
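For completeness, once the MVT is available the deduction of (1) is a one-liner; all the genuinely technical content sits inside the proof of the MVT itself (via Rolle's theorem and the extreme value theorem, hence completeness of $\mathbb R$):

```latex
% For any x \neq 0, the Mean Value Theorem gives some c strictly between 0 and x with
f(x) - f(0) \;=\; f'(c)\,(x - 0) \;=\; 0 \cdot x \;=\; 0,
% so f(x) = f(0) for every x, i.e. f is constant.
```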

Answer (score 0)

Two which involve derivatives and integrals:

1) Liouville's Theorem that not all elementary functions have elementary antiderivatives. It is easy for undergraduates to appreciate the asymmetry between differentiation and antidifferentiation whereby something like $e^{-x^2}$ is trivial to differentiate but seemingly impossible to integrate. They can understand the definition of an elementary function and see how roughly a third of Calc I implicitly constitutes a proof that the derivative of an elementary function is elementary. They can understand the statement that the antiderivative of an elementary function isn't always elementary. But proving this involves differential algebra, which probably requires a full year of abstract algebra even to understand the basic definitions, hence would be inaccessible to all but the most advanced undergraduates.

2) The fact that in some sense most continuous functions are nowhere differentiable. The existence of even a single continuous nowhere differentiable function, such as the Weierstrass function, would be a borderline example of the sort of thing you are asking about: it would be hard for an undergraduate to understand, but should still be accessible to someone who has had real analysis. The way in which such functions are in some sense the rule rather than the exception is what requires advanced methods. This can be phrased in several ways. At its simplest, you can show that every continuous function on e.g. $[0,1]$ can be approximated arbitrarily closely by continuous nowhere differentiable functions. This can be stated precisely to an undergraduate but not proved at that level. Other ways of phrasing it involve notions of topological category or measure theory, and thus a careful statement of the results would require advanced concepts, though you could say something like "According to some mathematical definitions of 'most', most continuous functions are nowhere differentiable."

Answer (score 0)

I am surprised that no one has mentioned the algebraic solution of polynomial equations of degree greater than $2$. It seems obvious to students that solving a cubic equation is doable, although a little tricky. Intuition then leads students to assume there are similar solutions for quartics, quintics and even higher degree polynomial equations. Quartics turn out to be very messy, and, as we know, solving general quintics and higher in radicals can be shown to be impossible using advanced math (Abel–Ruffini and Galois theory).