Is mathematical history written by the victors?


The question is the title of a 2013 publication in the Notices of the American Mathematical Society, by twelve authors (of whom I am one). The contention is that the traditional history of mathematics is based on the assumption of an inevitable evolution toward the real continuum-based framework as developed by Cantor, Dedekind, Weierstrass (referred to as the "great triumvirate" by Carl Boyer here) and others. Taking some seminal remarks by Felix Klein as their starting point, the authors argue that the traditional view is lopsided and impoverishes our understanding of mathematical history. Have historians systematically underplayed the importance of the infinitesimal strand in the development of analysis? Editors are invited to submit reasoned responses based on factual historical knowledge, and to refrain from answers based on opinion alone.

To be even more explicit, we ask for additional examples from history that support either Boyer's viewpoint or the NAMS article viewpoint. That is, limit the question to facts and not opinions (based on a comment by Willie Wong at meta).

Note 1. For a closely related MO thread see this.

Note 2. A reaction to the Notices article by Craig Fraser was published here.

Note 3. Another would-be victor, Gray, is analyzed in this MSE thread.

Note 4. The Notices article originally contained a longish section on Euler, which was eventually split off into a separate article. That article shows, using the writings of Ferraro as a case study, how an assumption of default Weierstrassian foundations deforms a scholar's vision of Euler's mathematics. It was published in 2017 in the Journal for General Philosophy of Science.

Note 5. A response to Craig Fraser's reaction was published in 2017 in Mat. Stud.; see this version with hyperlinks.

Note 6. Further insight into the mentality of some math historians can be gleaned from a recent (2022-23) exchange in The Mathematical Intelligencer; see the answer https://math.stackexchange.com/a/4725050/72694 below.

There are 4 answers below.

Best answer (score 32):

Certainly the victors write the history, generally. But when the victory is so complete that there is no further threat, the victors sometimes feel they can beneficently tolerate "docile" dissent. :)

Srsly, folks: having been on various sides of such questions, at least as an interested amateur, and having wanted new-and-wacky ideas to work, and having wanted a successful return to the intuition of some of Euler's arguments ... I'd have to say that at this moment the Schwartz-Grothendieck-Bochner-Sobolev-Hilbert-Schmidt-BeppoLevi (apologies to all those I left out...) enhancement of intuitive analysis is mostly far more cost-effective than various versions of "non-standard analysis".

In brief, the ultraproduct construction and "the rules", in A. Robinson's form, are a bit tricky (for people who have external motivation... maybe lack training in model theory or set theory or...) Fat books. Even the dubious "construction of the reals" after Dedekind or Cauchy is/are less burdensome, as Rube-Goldberg as they may seem.
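To indicate what the construction involves (a standard sketch, not taken from Robinson's book specifically): the hyperreal field can be obtained as an ultrapower of $\mathbb{R}$,
$$ {}^*\mathbb{R} \;=\; \mathbb{R}^{\mathbb{N}}/\mathcal{U}, $$
where $\mathcal{U}$ is a nonprincipal ultrafilter on $\mathbb{N}$ and two sequences are identified when they agree on a set belonging to $\mathcal{U}$. The class of the sequence $(1, \tfrac12, \tfrac13, \dots)$ is then a positive infinitesimal: for each standard real $r>0$, the set $\{n : 1/n < r\}$ is cofinite and hence belongs to $\mathcal{U}$. The "trickiness" referred to above lies partly in the fact that the existence of $\mathcal{U}$ requires a nonconstructive choice principle.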

Nelson's "Internal Set Theory" version, as illustrated very compellingly by Alain Robert in a little book on it, as well, achieves a remarkable simplification and increased utility, in my opinion. By now, having spent some decades learning modern analysis, I do hopefully look for advantages in non-standard ideas that are not available even in the best "standard" analysis, but I cannot vouch for any ... yet.

Of course, presumably much of the "bias" is that relatively few people have been working on analysis from a non-standard viewpoint, while many-many have from a "standard" viewpoint, so the relative skewing of demonstrated advantage is not necessarily indicative...

There was a 1986 article by C. Henson and J. Keisler, "On the strength of nonstandard analysis", in J. Symbolic Logic, maybe cited by A. Robert?... which follows up on the idea that a well-packaged (as in Nelson) version of the set-theoretic subtlety of the existence of an ultraproduct is (maybe not so-) subtly stronger than the usual set-theoretic riffs we use in "doing analysis", even with AxCh as usually invoked, ... which is mostly not very serious for any specific case. I have not personally investigated this situation... but...

Again, "winning" is certainly not a reliable sign of absolute virtue. Could be a PR triumph, luck, etc. In certain arenas "winning" would be a stigma...

And certainly the excesses of the "analysis is measure theory" juggernaut are unfortunate... For that matter, a more radical opinion would be that Cantor would have found no need to invent set theory and discover problems if he'd not had a "construction of the reals".

Bottom line for me, just as one vote, one anecdotal data point: I am entirely open to non-standard methods, if they can prove themselves more effective than "standard". Yes, I've invested considerable effort to learn "standard", which, indeed, are very often badly represented in the literature, as monuments-in-the-desert to long-dead kings rather than useful viewpoints, but, nevertheless, afford some reincarnation of Euler's ideas ... albeit in different language.

That is, as a willing-to-be-an-iconoclast student of many threads, I think that (noting the bias of number-of-people working to promote and prove the utility of various viewpoints!!!) a suitably modernized (= BeppoLevi, Sobolev, Friedrichs, Schwartz, Grothendieck, et al) epsilon-delta (=classical) viewpoint can accommodate Euler's intuition adequately. So far, although Nelson's IST is much better than alternatives, I've not (yet?) seen that viewpoint produce something that was not comparably visible from the "standard" "modern" viewpoint.

Answer (score 27):

To give an example of the kind of answer requested here, note that one of the first examples in the NAMS text is from David Mumford, who wrote about overcoming his own prejudice (stemming from what he was taught concerning infinitesimals) in the following terms: "In my own education, I had assumed that Enriques [and the Italians] were irrevocably stuck.… As I see it now, Enriques must be credited with a nearly complete geometric proof using, as did Grothendieck, higher order infinitesimal deformations.… Let’s be careful: he certainly had the correct ideas about infinitesimal geometry, though he had no idea at all how to make precise definitions."

I enjoyed paul garrett's answer though it is steered in a slightly different direction, namely the effectiveness of NSA in cutting-edge research, whereas my question is mostly concerned with historical interpretation and getting an accurate picture of the mathematical past.

To give another example, Fermat's procedure of adequality involves a step where Fermat drops the remaining "E" terms; he carefully chooses his terminology and does not set them equal to zero. Similar remarks apply to Leibniz. Yet historians often assume that there is a logical contradiction involved at the basis of their methods, which can be summarized in the notation of modern logic as $(dx\not=0)\wedge(dx=0)$. Such remarks often go hand-in-hand with claims that the alleged logical contradiction was finally resolved around 1870. Without detracting from the greatness of the accomplishment around 1870, such criticism of the early pioneers of the calculus may not be on target.
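To make the procedure concrete, here is the standard textbook illustration of adequality (my paraphrase, not Fermat's own notation): to maximize $x(b-x)$, one adequates the expression at $x$ and at $x+E$:
$$ x(b-x) \;\sim\; (x+E)(b-x-E). $$
Expanding and cancelling gives $0 \sim bE - 2xE - E^2$; dividing through by $E$ yields $b \sim 2x + E$, and only at this final stage are the remaining $E$ terms dropped, giving $x = b/2$. The point is that $E$ is divided by while still "present" and only afterwards suppressed, rather than being set equal to zero from the outset; this is why the summary $(dx\not=0)\wedge(dx=0)$ arguably misdescribes the method.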

Answer (score 14):

(This is meant as a response to a comment by Pete L. Clark on whether the history of analysis was a "linear progression". Due to its length I decided to post it as a separate answer) I agree that focusing on the term "linear" is not the issue. What does seem to be a meaningful issue is the following closely related question.

Is it accurate to view the formalisation of analysis around 1870, an extremely important development by all accounts, as having established a "true" foundation of analysis in the context of the Archimedean continuum and by eliminating infinitesimals?

An alternative view is that the success of the Archimedean formalisation in fact incorporated an aspect of failure, as well, namely a failure to formalize a ubiquitous aspect of analysis as it had been practiced since 1670: the infinitesimal.

According to the alternative view, there is not one strand but two parallel strands in the development of analysis: one in the context of an Archimedean continuum, as formalized around 1870, and one in the context of what could be called a Bernoullian continuum (Johann Bernoulli having been the first to base analysis systematically and exclusively on a system incorporating infinitesimals). This strand was not formalized until the work of Edwin Hewitt in the 1940s, Jerzy Łoś in the 1950s, and especially Abraham Robinson in the 1960s, but its sources are already in the work of the great pioneers of the 17th century.

To give an example, in his recent article (Gray, J.: A short life of Euler. BSHM Bulletin. Journal of the British Society for the History of Mathematics 23 (2008), no. 1, 1-12), Gray makes the following comment:

"At some point it should be admitted that Euler's attempts at explaining the foundations of calculus in terms of differentials, which are and are not zero, are dreadfully weak" (p. 6). He provides no evidence for this claim.

It seems to me that Gray's sweeping claim comes from a "linear progression" school of thinking in which Weierstrass is credited with eliminating logically faulty infinitesimals, so of course Euler, who used infinitesimals galore, would necessarily be "dreadfully weak", with no further explanation needed.

Answer (score 2):

A recent discussion at https://math.stackexchange.com/questions/455871/cauchys-limit-concept is a good illustration of the influence of feedback-style ahistory (to borrow Grattan-Guinness's term), in which Weierstrass's ideas are read into an earlier author whether or not they belong there. To be sure, there is a considerable amount of historical controversy concerning Cauchy. J. Grabiner emphasizes the importance of the germs of epsilon, delta procedures that can be found in certain arguments in Cauchy's oeuvre. However, the actual epsilon, delta definition of limit (as opposed to procedures found in certain arguments) was not introduced by Cauchy but rather by later authors (usually Weierstrass is credited, even though an earlier occurrence is found in Dirichlet).

In any case, the formalisation of the epsilontic limit concept certainly does not lie with Cauchy. What Cauchy did write about limits is that a variable quantity has limit $L$ if its values indefinitely approach $L$. With Cauchy, the primitive notion is that of a variable quantity, and limits are defined in terms of the latter in a fashion almost identical to what Newton wrote a few centuries earlier. Recently young scholars like Bråting and Barany have challenged received views on Cauchy.

Meanwhile, the discussion at https://math.stackexchange.com/questions/455871/cauchys-limit-concept proceeds under the explicitly stated assumption of an alleged "Cauchy's formalization of limits", which is contrary to fact. The assumption was not challenged by any of the participants. This indicates that the community is often not aware of the true nature of Cauchy's work in analysis, including his definition of continuity, expressed in terms of infinitesimals rather than epsilon, delta.
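To make the contrast explicit (a paraphrase of both definitions, not a quotation): Cauchy's 1821 definition of the continuity of $f$ at $x$ runs along the lines of
$$ \alpha \text{ infinitesimal} \;\Longrightarrow\; f(x+\alpha)-f(x) \text{ infinitesimal}, $$
whereas the later definition attributed to Weierstrass reads
$$ \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x' \; \bigl(|x'-x| < \delta \;\Rightarrow\; |f(x')-f(x)| < \varepsilon\bigr). $$
The first is stated in terms of variable quantities and infinitesimals, the second in terms of static inequalities over an Archimedean continuum; reading the second into the first is precisely the feedback-style ahistory at issue.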