Mathematical ideas that took long to define rigorously


It often happens in mathematics that the answer to a problem is "known" long before anybody knows how to prove it. (Some examples of contemporary interest are among the Millennium Prize problems: E.g. Yang-Mills existence is widely believed to be true based on ideas from physics, and the Riemann hypothesis is widely believed to be true because it would be an awful shame if it wasn't. Another good example is Schramm–Loewner evolution, where again the answer was anticipated by ideas from physics.)

More rare are the instances where an abstract mathematical "idea" floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea. An example of this is umbral calculus, where a mysterious technique for proving properties of certain sequences existed for over a century before anybody understood why the technique worked, in a rigorous way.

I find these instances of mathematical ideas without rigorous interpretation fascinating, because they seem to often lead to the development of radically new branches of mathematics$^1$. What are further examples of this type?

I am mainly interested in historical examples, but contemporary ones (i.e. ideas which have yet to be rigorously formulated) are also welcome.


  1. Footnote: I have some specific examples in mind that I will share as an answer, if nobody else does.

There are 19 answers below.

2
On

Natural transformations are a "natural" example of this. Mathematicians knew for a long time that certain maps--e.g. the canonical isomorphism between a finite-dimensional vector space and its double dual, or the identifications among the varied definitions of homology groups--were more special than others. The desire to have a rigorous definition of "natural" in this context led Eilenberg and Mac Lane to develop category theory. As Mac Lane allegedly put it:

"I didn't invent categories to study functors; I invented them to study natural transformations."

4
On

At the risk of having it called an obvious example, I submit Euclid's parallel postulate. It was formulated in his Elements ca. 300 B.C. and then rationalized (including challenges and attempted proofs) for many centuries, until Saccheri laid down the $\,3\,$ alternatives of one/none/multiple parallels to a line through a given point in the $\,18^{th}\,$ century; it then took another $100$ years before the non-Euclidean geometries were formalized by Lobachevsky and Bolyai.

8
On

Continuity is an example of a concept that was unclear for some time and also defined differently from what we now consider its "correct" definition. See this source (Israel Kleiner, Excursions in the History of Mathematics, pp. 142-3) for example.

In the eighteenth century, Euler did define a notion of "continuity" to distinguish between functions as analytic expressions and the new types of functions which emerged from the vibrating-string debate. Thus a continuous function was one given by a single analytic expression, while functions given by several analytic expressions or freely drawn curves were considered discontinuous. For example, to Euler the function

$$ f(x) = \left\{\begin{array}{ll} x^2 & x > 0 \\ x & x \leq 0 \end{array}\right. $$

was discontinuous, while the function comprising the two branches of a hyperbola was considered continuous (!) since it was given by the single analytic expression $f(x) = 1/x$.

[...]

In his important Cours d'Analyse of 1821 Cauchy initiated a reappraisal and reorganization of the foundations of eighteenth-century calculus. In this work he defined continuity essentially as we understand it, although he used the then-prevailing language of infinitesimals rather than the now-accepted $\varepsilon - \delta$ formulation given by Weierstrass in the 1850s. [...]

(Note that it's useful to make explicit on what domain you're saying something is continuous; here, the author of the book is actually slipping up, since $1/x$ is in fact continuous on its natural domain $\mathbb R \setminus \{0\}$. Thanks @jkabrg)

7
On

Following from the continuity example, in which the $\epsilon$-$\delta$ formulation eventually became ubiquitous, I submit the notion of the infinitesimal. It took until Robinson's work in the early 1960s before we had "the right construction" of infinitesimals, via ultrapowers, making manipulation of infinitesimals a fully rigorous way of dealing with the reals. They had been a very useful tool for centuries before then: Cauchy, for example, used them regularly and attempted to formalise them without success, and Leibniz's calculus was defined entirely in terms of infinitesimals.

Of course, there are other systems which contain infinitesimals - for example, the field of formal Laurent series, in which the variable may be viewed as an infinitesimal - but e.g. the infinitesimal $x$ doesn't have a square root in this system, so it's not ideal as a place in which to do analysis.
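
A quick way to see why $x$ has no square root there (a standard argument, not from the answer itself): every nonzero formal Laurent series $f=\sum_{n\ge n_0}a_nx^n$ with $a_{n_0}\neq 0$ has an integer order $\operatorname{ord}(f)=n_0$, and

$$ \operatorname{ord}(fg)=\operatorname{ord}(f)+\operatorname{ord}(g), $$

so $\operatorname{ord}(f^2)$ is always even while $\operatorname{ord}(x)=1$; hence no $f$ with $f^2=x$ exists. One has to pass to Puiseux series (Laurent series in the variables $x^{1/n}$) to get square roots.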

7
On

Euclidean geometry. You think calculus was missing rigorous understanding? Non-Euclidean geometry? How about plain old Euclidean geometry itself? You see, even though Euclid's Elements invented rigorous mathematics, even though it pioneered the axiomatic method, even though for thousands of years it was the gold standard of logical reasoning - it wasn't actually rigorous.

The Elements is structured to seem as though it openly states its first principles (the infamous parallel postulate being one of them), and as though it proves all its propositions from those first principles. For the most part, it accomplishes the goal. In notable places, though, the proofs make use of unstated assumptions. Some proofs are blatant non-proofs: to prove side-angle-side (SAS) congruence of triangles, Euclid tells us to just "apply" one triangle to the other, moving them so that their vertices end up coinciding. There's no axiom about moving a figure onto another! Other proofs have more insidious omissions. In the diagram, does there exist any point where the circles intersect? It's "visually obvious", and Euclid assumes they intersect while proving Proposition 1, but the assumption does not follow from the axioms.

[figure: allegedly intersecting circles]
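
A standard way to make the gap explicit (not part of the original answer): interpret "point" as a point of the rational plane $\mathbb{Q}^2$ and carry out Proposition 1 on the segment from $(0,0)$ to $(1,0)$. The two circles are

$$ x^2+y^2=1 \qquad\text{and}\qquad (x-1)^2+y^2=1, $$

and their only common solutions are $\big(\tfrac12,\pm\tfrac{\sqrt3}{2}\big)$, which are not rational points. Nothing in Euclid's stated postulates rules out this interpretation, so the existence of the intersection point genuinely needs an extra axiom (a circle-circle intersection or continuity axiom).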

In general, the Elements pays little attention to issues of whether things really intersect in the places you'd expect them to, or whether a point is really between two other points, or whether a point really lies on one side of a line or the other, etc. We all "know" these concepts, but to avoid the trap of, say, a fake proof that all triangles are isosceles, a rigorous approach to geometry must address these concepts too.

It was not until the work of Pasch, Hilbert, and others in the late 1800s and early 1900s that truly rigorous systems of synthetic geometry were developed, with the axiomatic treatment of "betweenness" being a key new fundamental idea. Only then, millennia after the journey began, were the elements of Euclidean geometry truly accounted for.

2
On

Sets. As late as the early twentieth century, Bertrand Russell showed that a leading theory of them (naive set theory) was self-contradictory, because it led to Russell's paradox: does the set of all sets that do not contain themselves contain itself? The accepted solution was ZF set theory.
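
The paradox itself is one line (stated here for completeness): form

$$ R=\{\,x : x\notin x\,\}, \qquad\text{then}\qquad R\in R \iff R\notin R, $$

a contradiction. ZF avoids it by replacing unrestricted comprehension with separation: one may only carve $\{x\in A : \varphi(x)\}$ out of a set $A$ that has already been constructed.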

Another example that jumps to mind is counting up: Peano arithmetic was axiomatized in the nineteenth century (and has been considerably revised since). Or algorithms.

Which raises the point, I guess, that we're still looking for the best foundation for mathematics itself.

8
On

The notion of the real numbers themselves - as Dedekind cuts of the rationals, as equivalence classes of rational Cauchy sequences, or as elements of the (essentially unique) complete ordered field - only appeared in the 19th century, despite the centrality of calculus in mathematics and other sciences.
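
For concreteness, one common modern formulation (not spelled out in the answer): a real number is a Dedekind cut, i.e. a set $A\subseteq\mathbb{Q}$ with

$$ \emptyset\neq A\neq\mathbb{Q},\qquad (a\in A\ \wedge\ b<a)\Rightarrow b\in A,\qquad A\ \text{has no greatest element}, $$

with $\mathbb{R}$ the set of all such cuts, ordered by inclusion.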

3
On

The notion of probability has been in use since the Middle Ages, or maybe earlier. But it took until the middle of the 20th century for probability theory to be formalized and given a rigorous basis. According to Wikipedia:

There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation, sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.

There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as usually understood.
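
For reference, Kolmogorov's formulation fits in a single display (a standard statement, not part of the quoted text): a probability space is a triple $(\Omega,\mathcal F,P)$ with $\mathcal F$ a $\sigma$-algebra of subsets of $\Omega$ and $P\colon\mathcal F\to[0,1]$ satisfying

$$ P(\Omega)=1 \qquad\text{and}\qquad P\Big(\bigcup_{i=1}^{\infty}A_i\Big)=\sum_{i=1}^{\infty}P(A_i)\quad\text{for pairwise disjoint }A_i\in\mathcal F. $$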

2
On

Differentiable manifolds are an example. A rigorous definition only appeared about 100 years ago, in the works of Hermann Weyl and Hassler Whitney, although such objects were studied long before that time. Gauss's Theorema Egregium can already be seen as a theorem about this kind of concept, although it was stated long before the concept was formally defined.

0
On

Complex numbers are an example of an
"..abstract mathematical "idea" [that] floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea".

It was a rather embarrassing idea in the times of Cardano and Bombelli (16th century), and it took a lot of imagination (sic) and mental strain before it was settled.

2
On

The delta "function" showed up in Fourier's treatise "Théorie analytique de la chaleur" of 1822.

It wasn't until ~1945 that Schwartz formally defined the delta functional as a distribution.
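
In Schwartz's framework (a standard statement, added for context), $\delta$ is the continuous linear functional on test functions $\varphi\in C_c^\infty(\mathbb{R})$ given by

$$ \langle\delta,\varphi\rangle=\varphi(0), $$

which makes rigorous the formal "sifting" rule $\int_{-\infty}^{\infty}\delta(x)\,\varphi(x)\,dx=\varphi(0)$ that Fourier and, later, Dirac used freely.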

0
On

Weil's conception of a cohomology theory for varieties adequate to solve the Weil conjectures (the "Riemann hypothesis" for varieties over finite fields) is an example. The idea of a Weil cohomology was formulated in 1949 by Weil, and then Grothendieck came along in the 1960s with étale and $\ell$-adic cohomology, which fit Weil's criteria and allowed Deligne to prove the conjectures in 1974.

25 years may not be the longest time for something mentioned here, but exploring this idea and trying to rigorously realize it definitely helped create a decent portion of 20th century math.

0
On

You might want to check out Imre Lakatos' "Proofs and Refutations", which depicts in a fictional dialog the evolution of the idea of a "polyhedron" over the centuries. His goal is to illuminate the dialectical process of definition and redefinition in mathematics, and perhaps in cognition generally.

2
On

"Computation" (or effective calculability) is still an abstract mathematical idea that floats around awaiting a rigorous definition.

There are various candidates for defining what the term could mean -- e.g. the set of strings that can be generated by a certain type of grammar, or the set of strings that can be accepted by a certain type of machine, or the set of functions that can be defined given a certain set of function-construction rules. And there are rigorous proofs of the equivalences between many of those definitions.

But we are still left with an intuition about whether those definitions are adequate. That intuition is called Church's Thesis or the Church-Turing Thesis, but it remains (merely) a thesis. We might still come up with a broader definition of what constitutes a "computation" that cannot be subsumed under the existing candidates.

1
On

Structure-preserving function.
It seems that this concept doesn't have a general definition yet. Category theory defines the rules for calculating with morphisms, but it doesn't provide a general, formal rule for what a structure-preserving function is when the objects of the category are sets with additional structure.

For the kinds of structures appearing in universal algebra it is clear enough, but, for example, from an algebraic perspective, what makes continuity the natural concept in topology?


This may provide a clue:

There is a subcategory of the category whose objects are relations and whose morphisms are relations between relations, consisting of all relations as objects and, as morphisms, those relations between relations that can be expressed by a pair of relations, as follows.

Given two relations $R\subseteq A\times B$ and $R'\subseteq A'\times B'$, some relations $r\subseteq R\times R'$ can be characterized by two relations $\alpha\subseteq A\times A'$ and $\beta\subseteq B\times B'$ so that

$((a,b),(a',b'))\in r \iff \Big((a,a')\in\alpha\wedge (b,b')\in\beta\wedge (a,b)\in R\implies (a',b')\in R'\Big)$

and if $R''\subseteq A''\times B''$ and $r'\subseteq R'\times R''$, where $r'$ is characterized by $\alpha'\subseteq A'\times A''$ and $\beta'\subseteq B'\times B''$, then the composition $r'\circ r$ is characterized by the relations $\alpha'\circ\alpha\subseteq A\times A''$ and $\beta'\circ\beta\subseteq B\times B''$ (where $\circ$ denotes the composition of relations).

Suppose $A=B\times B$ and that $R\subseteq A\times B$ is the composition in a magma. Then the functions among the morphisms between two such objects define magma morphisms $B\to B'$.

Suppose $B=\mathcal P(A)$ and that $R\subseteq A\times B$ is the relation $(a,S)\in R\iff a\in\overline{S}$ for some topology on $A$. Then the functions among the morphisms between two such objects define continuous functions $A\to A'$.

0
On

Computation seems to fall into this category - for a long time, there had been an informal notion of something like "information processing". There had, of course, been the idea of a function for a long time. There were also prototypical algorithms, even as far back as Euclid. But the general idea of a well-defined process that implements a function based on small steps did not appear until Turing defined it in his 1936 paper.
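
As an illustration of such a "prototypical algorithm", here is Euclid's gcd procedure written out as a short Python sketch (the code and names are mine, added only to make the "small steps" idea concrete):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b).

    Each step is a small, mechanical rewrite of the state; the process
    terminates because |b| strictly decreases, and correctness rests on
    the invariant gcd(a, b) == gcd(b, a % b).
    """
    while b != 0:
        a, b = b, a % b
    return abs(a)


print(gcd(1071, 462))  # prints 21
```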

0
On

The calculus of variations was actively applied from the 17th century onward, but it was only put on a firm theoretical foundation with the introduction of Banach spaces around 1920.

0
On

Fractals ("I know a fractal when I see it"). What is the mathematical definition of this concept?

To this day, the notion of a fractal still does not have a generally accepted mathematical definition.

Roughly speaking, a fractal is a figure or shape that has a self-similarity property; the geometry of a fractal differs from one shape to another.

On this point I would like to quote a speaker at a conference in Leipzig last year (2017). He was asked by an attendee: "Sir, what is a fractal?"

His answer: "I know a fractal when I see it."

Plainly, once someone has shown you a fractal for the first time, you will recognise the next fractal just by looking at its shape, no matter how different it may be from the previous one: this is a fact.

There are various kinds of well-known fractals: the Sierpinski gasket (the triangle below), the Mandelbrot set (see the second figure), Julia sets, ...

https://georgemdallas.wordpress.com/2014/05/02/what-are-fractals-and-why-should-i-care/

[figures: the Sierpinski gasket and the Mandelbrot set]
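
To make the "you know it when you see it" point concrete, here is a minimal Python sketch (my own illustration, not from the answer) that prints a coarse ASCII picture of the Mandelbrot set; shrinking the coordinate window onto the boundary reveals ever finer self-similar detail:

```python
# A point c is in the Mandelbrot set if the iteration z -> z**2 + c,
# starting from z = 0, stays bounded forever.

def escapes(c: complex, max_iter: int = 50) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit provably diverges
            return True
    return False

for row in range(24):
    y = 1.2 - row * 0.1
    line = ""
    for col in range(64):
        x = -2.0 + col * 0.05
        line += " " if escapes(complex(x, y)) else "#"
    print(line)
```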

0
On

The Egyptians and the Babylonians knew a lot of mathematical facts (e.g. the Pythagorean theorem, the quadratic formula, the volumes of prisms and pyramids, etc.) which were later proved by the Greeks.

Also, Pascal and Fermat used mathematical induction and the well ordering of the naturals (in the form of infinite descent) 250 years before Peano formalized the axioms defining the natural numbers.
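
For context (standard statements, not in the original answer): the key axiom Peano added is induction, and infinite descent is its counterpart via well-ordering. For any set $S$ of naturals,

$$ \big(0\in S\ \wedge\ \forall n\,(n\in S\to n+1\in S)\big)\ \Longrightarrow\ S=\mathbb{N}; $$

equivalently, every nonempty set of naturals has a least element, so there is no infinite strictly decreasing sequence of naturals, which is exactly the principle that Fermat's descent arguments rely on.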