What are the most overpowered theorems in mathematics?

By "overpowered," I mean theorems that allow disproportionately strong conclusions to be drawn from minimal / relatively simple assumptions. I'm looking for the biggest guns a research mathematician can wield.

This is different from "proof nukes" (that is, applying advanced results to solve much simpler problems). It is also not the same as requesting very important theorems: while those are beautiful, high-level, and often motivate the creation of entire new disciplines of mathematics, they aren't always commonly used to prove other things (e.g. FLT), and when they are, they tend to have more elaborate hypotheses that are more proportional to their conclusions (e.g. the classification of finite simple groups).

Answers should contain the name and statement of the theorem (if it is named), the discipline(s) it comes from, and a brief discussion of why the theorem is so good. I'll start with an example.

The Feit-Thompson Theorem. All finite groups of odd order are solvable.


Solvability is an incredibly strong condition in that it immediately eliminates all the bizarre, chaotic things that can happen in finite nonabelian simple groups. Solvable groups have a terminating derived series, a composition series made of cyclic groups of prime order, a full set of Hall subgroups, and so on. The fact that we can read off all that structure simply by looking at the parity of a group's order is amazing.

There are 25 best solutions below

7
On

Look into complex analysis. You have insanely powerful theorems all stemming from a function being analytic :).

1
On

Sylow's first, second, and third theorems about finite groups are powerful tools for proving results about finite groups. As a nice application, one can show that if $G$ is a finite group of order $pq$, where $p$ and $q$ are distinct primes with $p \lt q$, then: if $p \mid q-1$, there exists a unique nonabelian group of order $pq$ up to isomorphism, while in the other case every group of order $pq$ actually turns out to be cyclic.

0
On

The Contraction Mapping Theorem. It simply states that if $X$ is a complete metric space and $T:X\rightarrow X$ is a contraction mapping, then $T$ has a unique fixed point. This theorem is used a lot in studying solutions in numerical analysis and in ordinary and partial differential equations.
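
As a quick illustration (my own sketch, not part of the answer), the proof is constructive: iterating $T$ from any starting point converges to the fixed point. The helper name `fixed_point` is mine:

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n); for a contraction on a complete
    metric space this converges to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# cos maps [0, 1] into itself with |cos'(x)| = |sin(x)| <= sin(1) < 1,
# so it is a contraction there; the iteration converges to the unique
# solution of cos(x) = x (the "Dottie number", ~0.739085).
root = fixed_point(math.cos, 0.5)
print(root)
```

The same iteration scheme underlies Picard iteration for ODEs and many convergence proofs in numerical analysis.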

0
On

The field of complex numbers $\mathbb{C}$ is algebraically closed. The fundamental theorem of algebra is a result that we use in a lot of places, and it has its own importance in mathematics.

The same can be said of the other fundamental theorems of mathematics: the Fundamental Theorem of Arithmetic, the Fundamental Theorem of Calculus, $\ldots$
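
A tiny numerical illustration (my addition, not the answer's): $x^2+1$ has no real roots, but over $\mathbb{C}$ it splits completely, as `numpy.roots` shows.

```python
import numpy as np

# x^2 + 1 = 0 has no real solutions, but the fundamental theorem of
# algebra guarantees two complex roots (with multiplicity): here, ±i.
roots = np.roots([1, 0, 1])
print(roots)  # [i, -i] up to ordering and rounding
```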

0
On

Orbit-stabilizer. There are so many things that can be described as group actions, and tons of counting arguments reduce to it.
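
A small worked example (my own sketch): the symmetry group of a square, acting on its four vertices, satisfies $|G| = |\mathrm{Orb}(x)|\cdot|\mathrm{Stab}(x)|$:

```python
def compose(p, q):
    """Composition of permutations given as tuples: (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

# Generators of the symmetry group of a square, as permutations of its
# vertices 0,1,2,3 (labeled counterclockwise): a 90-degree rotation and
# a reflection.
rot = (1, 2, 3, 0)
ref = (1, 0, 3, 2)

# Close up under composition to get the whole group (dihedral, order 8).
group = {(0, 1, 2, 3)}
changed = True
while changed:
    changed = False
    for g in list(group):
        for h in (rot, ref):
            gh = compose(g, h)
            if gh not in group:
                group.add(gh)
                changed = True

orbit = {g[0] for g in group}                 # where vertex 0 can be sent
stabilizer = {g for g in group if g[0] == 0}  # symmetries fixing vertex 0
print(len(group), len(orbit), len(stabilizer))  # 8 = 4 * 2
```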

0
On

The existence of a primitive root in $(\mathbb{Z}/p\mathbb{Z})^{\times}$, and the Well-Ordering Principle, if you're talking about number theory. The former gives a lot of structure to the multiplicative groups modulo primes, while well-ordering can be used to prove results about linear Diophantine equations, Euclid's lemma ($(a,b)=1 \land a\mid bc\Rightarrow a\mid c$), and unique prime factorization in the natural numbers. Also, this video by the author of Brown Sharpie is relevant.
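
To make that structure concrete, here is a brute-force check (my own sketch, with hypothetical helper names) that finds the least primitive root modulo a prime:

```python
def is_primitive_root(g, p):
    """True iff the powers of g hit every nonzero residue mod prime p."""
    seen, x = set(), 1
    for _ in range(p - 1):
        x = x * g % p
        seen.add(x)
    return len(seen) == p - 1

def least_primitive_root(p):
    # (Z/pZ)* is cyclic, so a generator is guaranteed to exist.
    return next(g for g in range(2, p) if is_primitive_root(g, p))

print(least_primitive_root(23))  # 5
```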

Oh! Don't forget about AM-GM, probably the most used inequality theorem ever.

3
On

I would include the pigeonhole principle in such a list. This principle is easy to explain to a four-year-old, and at the same time it is used throughout all areas of mathematical research. As an example, see the proof of Van der Waerden's theorem (in a special case).
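
A toy instance (mine, not from the answer): among any $n+1$ integers, two must be congruent mod $n$, because there are only $n$ residue classes ("holes") to put them in:

```python
def same_residue_pair(nums, n):
    """Pigeonhole: with more than n numbers and only n residue classes
    mod n, two numbers must share a class; return the first such pair."""
    holes = {}
    for x in nums:
        r = x % n
        if r in holes:
            return holes[r], x
        holes[r] = x
    return None  # only possible when len(nums) <= n

a, b = same_residue_pair([3, 17, 40, 9, 22, 8], 5)
print(a, b)  # 17 22 — both are 2 mod 5
```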

0
On

Donaldson's Theorem: The intersection form of a closed, simply-connected, smooth 4-manifold with definite intersection form is diagonalizable over the integers.

http://en.wikipedia.org/wiki/Donaldson%27s_theorem

The intersection form of a 4-manifold contains a large amount of topological information about the manifold, particularly when it is simply-connected. Donaldson's theorem thus tells us that, topologically, a closed definite smooth 4-manifold can't be too wild. For example, when this is combined with Freedman's work (which would also be a good candidate for an answer to this question), one gets the existence of non-smoothable manifolds.

2
On

The Classification of Closed Surfaces - A closed surface is fully determined by its Euler characteristic (or genus) and whether it is orientable. Further, every closed surface is either

  • The sphere $S^2$,
  • A finite connected sum of tori $T_n=T^2\#T^2\#\cdots\#T^2$, or
  • A finite connected sum of real projective planes $P_n=\mathbb{R}P^2\#\mathbb{R}P^2\#\cdots\#\mathbb{R}P^2$.

The fact that the homeomorphism type of a closed surface is determined by just two very simple invariants is a very powerful fact indeed. The Euler characteristic is very easy to calculate, and orientability is an especially easy concept to understand and determine for a surface. To think that all closed surfaces (the definition of which is, at first glance, seemingly quite broad) are determined by these criteria is quite remarkable.

It's also worth noting that the closely related statement for $2$-manifolds with boundary and marked points is just as simple to state and even more powerful.

0
On

The inverse function theorem (for $C^1$ multivariable maps between open sets of Euclidean space). You just check that, at some point, some determinant is nonzero (which is easily computed), and you get that the map is locally a diffeomorphism at that point!
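
To illustrate that one-determinant check (my own sketch, with a hypothetical `jacobian_det` helper), take the polar-to-Cartesian map, whose Jacobian determinant is $r$:

```python
import numpy as np

def jacobian_det(f, x, h=1e-6):
    """Approximate det Df(x) using central finite differences."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return np.linalg.det(J)

# Polar coordinates (r, theta) -> (x, y); det Df = r, so the inverse
# function theorem makes this a local diffeomorphism wherever r != 0.
f = lambda v: np.array([v[0] * np.cos(v[1]), v[0] * np.sin(v[1])])
det = jacobian_det(f, [2.0, 0.3])
print(det)  # ~2.0
```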

4
On

Gödel's incompleteness theorems

Hypotheses: you have a (recursively enumerable) consistent theory $T$ that can express the integers with addition and multiplication (not big hypotheses in math!)

Conclusions:

  1. There is a sentence $F$ that is true in the standard model of arithmetic but has no proof in $T$.
  2. The consistency of $T$ is not provable in $T$.

All the hypotheses are required: counterexamples are well known without multiplication, for example (Presburger arithmetic, the theory of addition alone, is complete and decidable). I'm amazed that basic arithmetic operations can have such strong consequences.

2
On

Cantor's Theorem.

For every set $A$, $|A|<|\mathcal P(A)|$.

7
On

Gödel's Completeness Theorem.

If $T$ is a first order theory, then $T\vdash\varphi$ if and only if $T\models\varphi$.

We use this theorem so much, we don't even notice anymore.

The theorem comes from the field of logic. Whenever we want to prove that the theory of groups proves a statement, we begin by considering an arbitrary group, and we show that the group satisfies that statement. Bam, completeness!

If mathematicians consider their work within the context of $\sf ZFC$ (even implicitly), then proving something about $L_2[0,1]$ is in fact proving that $\sf ZFC$ proves that the object which has this and that properties also satisfies that thing. In essence, this is another very implicit use of completeness.

6
On

The Stone–Weierstrass theorem.

Every continuous function $f: [0,1] \rightarrow \mathbb{R}$ can be uniformly approximated by polynomial functions.

It is 'overpowered' because one only needs $f$ to be continuous, and we get an approximation of $f$ by polynomials, which behave very nicely in many regards.

To give an example, you can use it to prove Brouwer's Fixed Point Theorem in the general case from the special case where $f$ is twice differentiable.
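
The classical constructive proof even produces explicit approximants, the Bernstein polynomials; a quick sketch (my own addition):

```python
import math

def bernstein(f, n):
    """Degree-n Bernstein polynomial of f on [0, 1]; these converge
    uniformly to f for any continuous f (Weierstrass approximation)."""
    def B(x):
        return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))
    return B

f = lambda x: abs(x - 0.5)   # continuous but not differentiable
B = bernstein(f, 200)
err = max(abs(f(i / 1000) - B(i / 1000)) for i in range(1001))
print(err)  # small, and -> 0 as n grows
```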

1
On

Lagrange's Theorem makes proving some aspects of group theory insanely simple. Like, for instance, the proof that any group of prime order is cyclic.

0
On

I hope powerful mathematical techniques will also be accepted here.

I am thinking of the Fourier transform and Fourier series. To me they seem like powerful tools that are used in analysis very much.

Similarly, transforms like the Laplace transform are used to solve differential equations efficiently.

2
On

Zorn's lemma.

True, it is equivalent to the axiom of choice. However, it is also a theorem of $\sf ZFC$. The uses of this lemma are countless and can be found anywhere in mathematics where infinite objects (of unlimited cardinality) are involved.

If $(P,\leq)$ is a partial order in which every chain has an upper bound, then there exists a maximal element in $P$.

1
On

Stokes' theorem - The integral of a differential form $\omega$ over the boundary of some orientable manifold $\Omega$ is equal to the integral of its exterior derivative $d\omega$ over the whole of $\Omega$, i.e. $$\int_{\partial\Omega}\omega=\int_\Omega d\omega$$

It's amazing and beautiful not only because of its brevity and elegance but also because it unifies the fundamental theorem of calculus, Green's theorem, and the divergence theorem, and even, in one generalization, Cauchy's theorem in complex analysis. We are also glad to accept it from a geometric viewpoint, since it embodies a kind of self-cancellation and symmetry. Moreover, Stokes' theorem is applied in a variety of areas, especially in physics, such as electromagnetics and hydrodynamics.
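
Two of those special cases, written out (my addition): on $\Omega=[a,b]$ with $\omega=f$ the statement is the fundamental theorem of calculus, and on a plane region with $\omega = P\,dx + Q\,dy$ it is Green's theorem:

```latex
\int_{\partial[a,b]} f \;=\; f(b) - f(a) \;=\; \int_a^b f'(x)\,dx \;=\; \int_{[a,b]} df,
\qquad
\oint_{\partial\Omega} P\,dx + Q\,dy
  \;=\; \int_\Omega \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dx\,dy.
```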

1
On

The CENTRAL LIMIT THEOREM. Ever since Laplace proved it, it has formed the basis of probability theory and statistics. We can make sense of the world with it; it is simply indispensable, and all that thanks to the genius of Laplace and Gauss.

Open any probability theory book and you will immediately stumble upon the CLT. Under very general conditions we can approximate the distribution of a sum or average by the nice bell curve, even in not-so-large samples. In applied research this is a blessing unparalleled by any other theorem.

It is no exaggeration to claim that modern statistics has been built on these premises, and the CLT should make it to the top of your list; top 5 minimum ;)
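
A simulation (my own sketch) shows it directly: means of uniform draws cluster in a bell shape with the predicted standard deviation $\sqrt{1/(12n)}$:

```python
import random
import statistics

random.seed(0)

# Each trial: the mean of n uniform(0,1) draws. The CLT says these
# means are approximately normal with mean 1/2 and standard deviation
# sqrt(1/(12 n)) ~ 0.0417 for n = 48.
n, trials = 48, 20000
means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

mu, sd = statistics.fmean(means), statistics.stdev(means)
print(mu)   # ~0.5
print(sd)   # ~0.0417
```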

4
On

Rice's Theorem is an extremely powerful one. It basically says that you cannot invent an algorithm which, taking program source code as an argument, would decide any nontrivial property of the function computed by that program.

Example: suppose I really need to make an analyzer which would take any program and tell me whether that program prints $0$ when given $0$ as input, or not. Rice's theorem says I will never have such an analyzer.
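
Here is the reduction behind that impossibility, sketched in Python (my own illustration; the `analyzer` mentioned in the comments is the hypothetical tool, and `wrapper` is the standard halting-problem reduction):

```python
def wrapper(prog, x):
    """Build a program q such that q(0) returns 0 iff prog halts on x."""
    def q(_input):
        prog(x)        # runs forever exactly when prog does not halt on x
        return 0
    return q

# If some analyzer(q) could decide "q returns 0 on input 0", then
# analyzer(wrapper(prog, x)) would decide whether prog halts on x,
# contradicting the undecidability of the halting problem.

q = wrapper(lambda x: None, 5)   # wrap a program that certainly halts
print(q(0))                      # 0
```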

0
On

Given a short exact sequence $0\to A\to B\to C\to 0$ of chain complexes in an abelian category, there exists a long exact sequence $\cdots\to H_{i+1}(C)\to H_i(A)\to H_i(B)\to H_i(C)\to\cdots$ of their homology groups. This gives the long exact sequence for derived functors, and thus allows the production of a myriad of long exact sequences with immense computational power, based on the usually easy-to-check condition of being a short exact sequence.

1
On

Changing the order of integration in multiple integrals. It is key to almost all applications of mathematics to science.

5
On

My linear algebra professor remarked that the well-known identity $\operatorname{Rank}(A) = \operatorname{Rank}(A^T)$ for any matrix $A$ can be derived with no special assumptions. (In fact, it holds over any field, finite fields included.)

1
On

Convexity implies continuity (on the interior of the domain).

The existence of non-(Lebesgue-)measurable sets.

The undecidability of the halting problem.

The Yoneda lemma.

3
On

Row rank equals column rank

If you have a matrix, then the dimension of the column space is the same as the dimension of the row space. Doesn't it sound amazing? I think, because of its applications, it should be called the fundamental theorem of linear algebra.
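
A numerical spot-check (my addition): transposing never changes the rank, even for a rectangular matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 5, size=(4, 7)).astype(float)

# Row rank equals column rank, so A and A^T always have the same rank,
# even though one is 4x7 and the other 7x4.
r, rT = np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T)
print(r, rT)
```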