Why have we chosen our number system to be decimal (base 10)?


After learning about the binary number system (only 2 symbols, i.e. 0 and 1), I wondered: why did we adopt the decimal number system (10 symbols) in the first place?

After all, isn't it rather inefficient compared to octal (8 symbols) or hexadecimal (16 symbols)?

6 Answers

Accepted answer (score 9):

Expanding on the comment by J.M., let me quote from the (highly recommended) book by Georges Ifrah, The Universal History of Numbers (Wiley, 2000, pp. 21-22):

Traces of the anthropomorphic origin of counting systems can be found in many languages. In the Ali language (Central Africa), for example, "five" and "ten" are respectively moro and mbouna: moro is actually the word for "hand" and mbouna is a contraction of moro ("five") and bouna, meaning "two" (thus "ten"="two hands").

It is therefore very probable that the Indo-European, Semitic and Mongolian words for the first ten numbers derive from expressions related to finger-counting. But this is an unverifiable hypothesis, since the original meanings of the names of the numbers have been lost.

Ifrah then goes on to explain that

...the hand makes the two complementary aspects of integers entirely intuitive. It serves as an instrument permitting natural movement between cardinal and ordinal numbering. If you need to show that a set contains three, four, seven or ten elements, you raise or bend simultaneously three, four, seven or ten fingers, using your hand as cardinal mapping. If you want to count out the same things, then you bend or raise three, four, seven or ten fingers in succession, using the hand as an ordinal counting tool.

Answer (score 3):

Because it makes the metric system so much simpler :).

Answer (score 7):

I think the answer here might be, that the guys who thought base 10 was a good idea had the largest sticks.

If one trusts Wikipedia, the Babylonians had a base 60 system, which can still be felt today in this "60 minutes in an hour" nonsense, and a (related) base 12 system was widely in use too. There are still unique words for "eleven" and "twelve", as well as expressions such as "a dozen". After all, you can count to twelve on a single hand, using the thumb to point at the twelve finger segments.

Then there was the essentially unary (base 1) Latin tally system, and (Wikipedia again) a base 20 system for the Maya.

Something as easy as "base 10 is natural for humans" does not explain it all. =)

Answer (score 7):

It is believed that the decimal system evolved mainly due to anthropomorphic reasons (5 digits on each hand) and is thought to be a simplification of the Babylonian sexagesimal (base 60) counting method.

To make this analogy precise, note that each of the 4 fingers (excluding the thumb) of the normal hand has 3 segments, giving 12 segments that can be counted off with the thumb; the 5 digits of the other hand then serve as pointers tracking how many dozens have been counted. This gives 3 × 4 × 5 = 60 unique configurations.

Answer (score 4):

I don't believe you understand the notion of efficiency in terms of encoding. Informally speaking, keep in mind that two factors are involved: (i) the cost of having different symbols (in base 10 there are 10 different symbols, in base 16 there are 16, etc.) and (ii) the length of the resulting string encoding a particular number.

When you weigh both factors and apply some basic information theory, the answer may look a bit surprising: the most efficient encoding has base $e$ (yes, that very $e = 2.718\dots$). Since we'd rather have a natural number as a base, the best we can get is base 3, and the next best is base 2.
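As a quick sketch of the argument above: representing a number $N$ in base $b$ takes about $\log_b N$ digits, and each digit position must distinguish $b$ states, so the total cost is roughly $b \cdot \log_b N = (b / \ln b) \ln N$. The $N$-independent factor $b / \ln b$ is minimized at $b = e$, which a few lines of Python can confirm for integer bases (the cost model is the informal one described in this answer, not a universal definition):

```python
import math

def relative_cost(b: int) -> float:
    # Cost per "unit of information": b states per digit position,
    # times log_b(N) positions, divided by the common factor ln(N).
    return b / math.log(b)

for b in (2, 3, 4, 8, 10, 16):
    print(b, round(relative_cost(b), 3))

# Base 3 wins among integer bases (3 is the integer closest to e),
# with base 2 a close second; base 10 is noticeably worse.
```

Running this shows the cost dipping to its minimum at 3 and climbing steadily for larger bases.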

So why, then, do computers use base 2 (0 and 1) rather than base 3 (say, -1, 0, and 1)? The answer is that it is simpler to design circuits that distinguish between two (rather than three) states. (I do remember reading that some early computers used base 3, but I can't recall the details.)

Now, as for octal and hex, those are simply convenient ways to record binary strings. If you have done some machine-level debugging, you have probably had occasion to read what's known as a "hexadecimal dump" (the contents of memory). It is surely easier to read than it would be as a binary dump, but what's lurking underneath is base 2.
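The reason hex works as shorthand is that each hex digit corresponds to exactly 4 bits, so converting between the two is pure digit-by-digit substitution, no arithmetic required. A small sketch (the 16-bit word here is just an illustrative value, not from any real dump):

```python
# Each hex digit maps to exactly one 4-bit group ("nibble"),
# so a 16-bit word becomes four hex digits.
word = 0b1101111010101101
print(hex(word))  # 0xdead

# Going the other way, each hex digit expands back to its 4 bits.
dump = "dead"
bits = "".join(format(int(d, 16), "04b") for d in dump)
print(bits)  # 1101111010101101
```

This is why hex (and octal, with 3-bit groups) pairs so naturally with binary, while base 10 does not: 10 is not a power of 2, so decimal digits do not line up with bit boundaries.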

(The question of why we use base 10 has been answered elsewhere.)

Answer (score 0):

I am tempted to answer "for the same reason this forum is in English", i.e. human convention for effective communication and calculation. However, there is another anthropomorphic aspect to this: there are advantages to a high base (compact encoding of numbers) and to a low base (fewer addition/multiplication facts to learn, and fewer number symbols to recall and write distinctly without confusion).

Binary and binary-related computations are used in computing because it was technologically easier to encode '0' and '1' than to work with a base higher than 2, and computing conventions were created when computing resources and speed had to be optimised. The available string length then restricts the size of number that can be stored or manipulated. Many of these resource constraints no longer exist in the same way (my computer has more capacity than I generally need).

So I think base 10 represents a rough optimisation: given the recall and abilities of human beings, it was a good compromise. And we do not always use it when there is an advantage in using another base. Note, too, that the octal and hexadecimal representations used in computing are the ones closest to base 10...