Why do we (mostly) restrict ourselves to Latin and Greek symbols?

99% of the variables, constants, etc. that I run into are denoted by either a Latin character (like $x$) or a Greek character (e.g. $\pi$). Sometimes I twitch a little when I have to keep two separate meanings for one symbol in my head at once.

Back during the Renaissance and prior, I can see why this would be the case (everybody who set today's conventions for notation spoke Latin or Greek). But it's 2016; thanks to globalization and technological advancements, when symbol overloading becomes an issue one could simply look at the Unicode standard (e.g. through a character map or virtual keyboard) and pick their favorite character.

So why do we (mostly) limit our choice of symbols to Latin and Greek characters?

6 Answers

Accepted answer:

It's in part for historical reasons and it's in part for practical reasons. You have already figured out the historical reasons.

One practical reason is that notation is meant to convey ideas; it's not meant to be used for its own sake. If, instead of asking whether your ideas are correct, your readers get stuck on whether your notation is correct, or worse, on what your notation even means, then the notation has failed.

Symbol overloading can be a problem, especially in a long document. Suppose you're writing a book about prime numbers. First you need $p$ and $q$ to be any odd positive primes, then you need them to be any primes in $\mathbb{Z}$ whatsoever, next you need $q - 1$ to be a multiple of $p$, then you need them to have quadratic reciprocity, and later on still you need them to not have quadratic reciprocity.

You could be tempted to declare that $p$ and $q$ are any positive odd primes; ぱ and く are any primes in $\mathbb{Z}$; パ and ク are primes such that one is one more than a multiple of the other; $\hat{p}$ and $\hat{q}$ have quadratic reciprocity; and ب and خ don't. You declare them at the beginning of the book and then use them without further explanation.

I'm sure these could be made a little less arbitrary, but if you have to read such a book, it's gonna get pretty damn tiresome to keep having to refer to the list of symbols at the beginning. It is better, in my opinion, to redefine the symbols at each theorem or other logical division of your document.

It might also help to think of it as somewhat analogous to accidentals in music notation: you have an E$\flat$ in measure 20, no E's of any kind for ten bars, then an E with no accidental next to it. Is it supposed to be E$\flat$? Probably not. If there were only one intervening barline, you could reasonably think the composer actually meant E$\flat$ again. But ten barlines is quite enough to cancel, I think.

So if in Theorem 2.1 the author uses $p$ and $q$ to mean primes with quadratic reciprocity, then pages later in Theorem 2.7 he uses $p$ and $q$ just to mean "primes," I would think quadratic reciprocity is now no longer a requirement.

Font variations can help generate distinct symbols that still bear some recognizable relation to other symbols. $P$ and $Q$ could be specific sets of primes, $\mathcal{P}$ and $\mathcal{Q}$ could be the products of primes in those sets, $\mathbb{P}$ is the set of all primes in $\mathbb{Z}$, $\mathfrak{P}$ and $\mathfrak{Q}$ are ideals generated by those primes, etc.
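
As a concrete (and purely illustrative) sketch of how such font variants are produced in LaTeX, which is not part of the original answer: \mathcal is built in, while \mathbb and \mathfrak come from the amssymb package.

```latex
\documentclass{article}
\usepackage{amssymb}  % provides \mathbb and \mathfrak
\begin{document}
Let $P$ and $Q$ be specific sets of primes, and write
$\mathcal{P} = \prod_{p \in P} p$ and $\mathcal{Q} = \prod_{q \in Q} q$
for the products of the primes in those sets. Let $\mathbb{P}$ denote
the set of all primes in $\mathbb{Z}$, and let $\mathfrak{P}$ and
$\mathfrak{Q}$ be the ideals generated by those primes.
\end{document}
```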

But even going this route it's possible to get carried away. Better to use a few symbols judiciously than a lot of symbols recklessly.

And one last point: even within the Latin and Greek alphabets, we limit ourselves further. $l$ is barely used, and certain Greek letters don't get much play because of their similarity to Latin letters; compare, e.g., the uppercase $A$ to the uppercase alpha. The potential for confusion multiplies in the broader Unicode palette: is ム from katakana, Bopomofo, or the CJK Unified Ideographs? You can't tell just by looking at it.

Answer:

My high school teacher used something like $f(陳)=陳^{2}+2陳+1$ to illustrate dummy variables. It's a matter of the compatibility and currency of symbol usage, and it all depends on the background of the readers. For instance, some people don't know what $\text{cis } \theta$ is. Chinese texts don't follow the Western terminology for Pythagoras' theorem, Pascal's triangle, or Pythagorean triples; they use "勾股定理", "楊輝三角形", and "勾股數" instead, while Japanese uses "三平方の定理" for Pythagoras' theorem.

Japanese literature on tabulating binomial coefficients and Bernoulli numbers appears in "Hatsubi-Sampo" (発微算法), written by Seki Takakazu (関 孝和).

Answer:

There isn't a single answer, but:

  • Unicode isn't useful when you're writing notes by hand or at a board.
  • We want symbols to be distinguished from running text but still be familiar and play nicely with other symbols. Unicode symbols that are just Latin letters with a decoration or glyphs from languages very few mathematicians are familiar with are not particularly useful.
  • We do use other symbols: Cyrillic Ш for the Tate-Shafarevich group of an elliptic curve, Hebrew $\aleph$ for various cardinals, $\pitchfork$ for transverse intersection, whatever $\wp$ is for the Weierstrass function, $\sharp$ and $\flat$ for the isomorphisms of the tangent and cotangent bundles in Riemannian geometry, etc. But:
  • We don't use other symbols that often because using too many of them makes text hard to read. Math isn't computer programming, and math symbols are neither typed nor bound by a rigid syntax; ultimately, the goal of a math paper is to have other humans read it.
  • As such, overloading symbols isn't necessarily a problem. It's hardly confusing to use $e$ to denote both the number $2.7182\cdots$ and the identity element of a group. We could introduce a unique symbol for the latter (not $1$, presumably), but how would that be any easier?
  • In the opposite direction, there are a dozen kinds of integrals denoted by the $\int$ symbol, but they're all more or less the same kind of object, so it's not taxing to the reader. The symbol $+$ denotes addition on a vector space, in a module, in a ring, of matrices, in a field, etc.; but it's all ultimately the same kind of operation.

Answer:

Math was written in plain text for most of human history. Everything was spelled out in written statements, including addition and multiplication. It was clumsy, extremely hard to manipulate, and probably one of the reasons some algorithms progressed more slowly. It was in the Renaissance that the first symbols were introduced (the equality sign, plus and minus, ...), and within a couple of centuries math was mostly written symbolically (still, vector notation took some time; Maxwell wrote his famous equations in an incredibly convoluted form). At that time, scholars were largely writing in Latin and were under the strong influence of ancient texts (mostly from Greek mathematicians, although Arabic math was also prominent). So Greek and Latin symbols were a natural choice.

I think nowadays we don't introduce new symbols from other alphabets because the world is pretty Anglo-centric. For instance, even the German conventions for physical quantities ($A$ for work, ...), which are taught in schools and used in most of continental Europe, are falling out of favor. It's hard even to get publishers and other authors to spell authors' last names correctly. Even when a non-ASCII symbol would be a nice abbreviation for something in another language, it's rarely used seriously (more as a "joke" in a classroom, especially for dummy variables, when you want to stress that it doesn't matter what you call them). The goal is uniformity: if we all use the same notation, then math transcends language and we can read anything any other scientist wrote (which, besides musical notation, is one of the rare examples of a universally understood "language").

New symbols are introduced when a new theory/derivation/quantity is introduced, and it's up to the original author to set the notation and naming (which is actually quite a big deal). So if a Russian scientist discovers something, he may as well use Cyrillic letters; nothing wrong with that. Of course, Americans will scoff, but that's nothing new. Still, most people nowadays don't even think of that, because the education system gets you used to Latin/Greek-based notation, so there's a strong persistence from generation to generation in the preferred choice of symbols.

One of the "problems" (even with Greek letters) is still text encoding on computers. Even though Unicode should have become a completely universal standard more than a decade ago, most people (again, the USA comes to mind) don't bother changing settings from the default Western encoding, and most web forms still claim other letters are "illegal characters" (even online submission forms for scientific papers). This is one of the reasons we still see u instead of μ for micro- and capital Latin letters instead of Greek lowercase for triangle angles.
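
To make the "illegal characters" point concrete, here is a small sketch (my own illustration, not part of the original answer, with arbitrary sample characters): a legacy single-byte encoding such as Latin-1 simply has no code points for Greek or Cyrillic letters, while UTF-8 accepts any Unicode character.

```python
# Sketch: Latin-1 can encode ASCII "u" and the micro sign, but has no code
# points for Greek mu or Cyrillic Sha; UTF-8 handles all of them.
for ch in ("u", "µ", "μ", "Ш"):   # ASCII u, micro sign, Greek mu, Cyrillic Sha
    for enc in ("latin-1", "utf-8"):
        try:
            print(f"{ch!r} as {enc}: {ch.encode(enc)!r}")
        except UnicodeEncodeError:
            print(f"{ch!r} as {enc}: rejected (an 'illegal character')")
```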

However, there are some very disturbing examples in the opposite direction. For instance, the SI system of units and measures defines its symbols not as abbreviations but as pure symbols: they aren't subject to grammar rules or translation. But almost universally, you see Cyrillic г instead of g for grams on food packaging in Serbia, in clear violation of the rules. It must be national pride or something, but it would lead to very ridiculous ambiguities if this translation were ever applied to milli- and micro- (making them both the same symbol).

Answer:

Because we don't want to get hung up on notation before we can be understood. What if I wanted to use $\tilde{n}$ or $\hat{o}$ as variables? Is the latter an arbitrary unit vector rather than an arbitrary variable? Does the former mean some kind of morphism or equivalence? Things can get problematic just by deviating a tiny bit from the Latin and Greek alphabets.

Answer:

I read the works of Hoene Wroński (the inventor of the Wronskian, among other things). He used Hebrew letters for functions. I guess it was because of his interest in Kabbalah...