Why is an ($\boldsymbol{\sigma}$-)algebra called a ($\boldsymbol{\sigma}$-)field in probability?


This is not strictly a math problem, but more of a question on nomenclature and history; so, if it's not appropriate for this site, let me know, and I'll ask someplace else.

Probability theory is known for a distinct nomenclature that sets it apart from measure theory proper. Most of these names reflect the subject's informal beginnings and its strong subsequent connections with mathematical statistics and other applications. Here are as many of these correspondences as I can think of off the top of my head:

measurable space --- sample space

measurable set --- event

almost everywhere --- almost surely (less commonly, almost certainly)

measurable function --- random variable

(Lebesgue) integral --- expected value

$\mathscr{L}_2$-norm --- standard deviation (not exactly, but similar)

$\mathscr{L}_2$-inner product --- covariance (again, not exactly)

weak-$\ast$ convergence --- convergence in distribution/law

($\sigma$-)algebra --- ($\sigma$-)field

Most of these probabilistic alternatives are more intuitive, except for the last one! I can't think of any empirical/pragmatic/intuitive/statistical reason to call a collection of 'events' either an 'algebra' or a 'field'. The name 'algebra', though, makes sense from a strictly mathematical point of view (in the same way that 'rings' of sets really are rings), as I spell out below. So why, then, do probabilists insist on 'field'?
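
To make that parenthetical concrete (this is just my own sketch of the standard observation, not anything specific to the history I'm asking about): if $\mathscr{A}\subseteq 2^\Omega$ is an algebra of subsets of $\Omega$, i.e. $\Omega\in\mathscr{A}$ and $\mathscr{A}$ is closed under complements and finite unions, then setting

$$A + B := A\,\triangle\,B = (A\setminus B)\cup(B\setminus A), \qquad A\cdot B := A\cap B,$$

makes $(\mathscr{A},+,\cdot)$ a commutative ring with additive identity $\varnothing$ and multiplicative identity $\Omega$, in which every element is idempotent ($A\cdot A=A$), i.e. a Boolean ring. A collection closed only under finite unions and set differences (not necessarily containing $\Omega$) is likewise a ring without unit, which is presumably why measure theorists speak of 'rings' and 'algebras' of sets. Nothing analogous seems to justify 'field'.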

Of course, tradition may be an answer, but I am more interested in knowing what or who started this tradition. Were 'algebra' and 'field' synonyms in the early days of Lebesgue theory, with only one name eventually surviving in each of the two literatures? Or do the two terms come from different mathematical schools, say Moscow versus Paris, or something like that?