Rationale behind truth values


I originally asked a question on Programmers.SE to learn why $0$ is considered $\text{false}$ and all other [integral] values are considered $\text{true}$. That sparked a huge debate, and many said it was a legacy from Boolean algebra, where $0$ is indeed $\text{false}$ and $1$ is $\text{true}$.

Somebody suggested I go further and ask here why this is actually the case in Boolean algebra. So here is the question: what is the rationale for $0$ to be $\text{false}$ and $1$ to be $\text{true}$ and not the other way around in Boolean algebra?

3 Answers

BEST ANSWER

The numbers you use essentially don't matter. But if you want to represent your $2^4$ truth functions (see Wikipedia) using arithmetic, then $0,1$ come in handy. This is because their properties as the additive and multiplicative identity elements simplify some computations.

You can represent the functions using any numbers, really. If $a$ is either the number representing $\text{true}$ or the number representing $\text{false}$, then

$$\text{NOT}(a):=\text{true}+\text{false}-a$$

works as a definition of negation. For example,

$$\text{NOT}(\text{true})=\text{true}+\text{false}-\text{true}=\text{false}.$$
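This encoding-independent negation can be sketched in a few lines of Python (my own illustration; the function name `make_not` and the sample encodings are not from the answer):

```python
# Sketch: negation as arithmetic, independent of which numbers
# encode true and false.
def make_not(true_val, false_val):
    """Return a NOT function for the encoding (true_val, false_val)."""
    def not_(a):
        # NOT(a) = true + false - a maps each truth value to the other.
        return true_val + false_val - a
    return not_

# Encoding 1: the usual {0, 1}.
not01 = make_not(1, 0)
assert not01(1) == 0 and not01(0) == 1

# Encoding 2: {-1, 1}, as in the graphic below.
notpm = make_not(1, -1)
assert notpm(1) == -1 and notpm(-1) == 1
```

The same formula works for any pair of distinct numbers, which is the answer's point: nothing about negation forces $0$ and $1$ in particular.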

Here follows a nice graphic showing all the general constructions. As examples, the use of $\{0,1\}$ and also $\{-1,1\}$ is demonstrated. Notice how using $0$ "for $\text{false}$" eliminates all the terms involving the number $s_0$, making the $\{0,1\}$ column especially short and simple for computations.

[Image: table of arithmetic constructions for the truth functions, with example columns for $\{0,1\}$ and $\{-1,1\}$]

ANSWER

I believe this has to do with Boolean algebra, where *or* is treated as addition and *and* as multiplication. If we assign $0$ to $\text{false}$ and $1$ to $\text{true}$, then the usual arithmetic works out.
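A quick sketch of that correspondence (my own illustration, not from the answer). One caveat: plain addition overflows on $1+1=2$, so over $\{0,1\}$ the usual choices are $a+b-ab$ for *or*, or addition mod $2$, which gives *xor* instead:

```python
# Over {0, 1}: AND is exactly multiplication; OR needs a correction
# term so that 1 OR 1 stays 1; addition mod 2 is XOR.
def AND(a, b): return a * b
def OR(a, b):  return a + b - a * b
def XOR(a, b): return (a + b) % 2

# Check all four input pairs against Python's built-in logic.
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == int(bool(a) and bool(b))
        assert OR(a, b)  == int(bool(a) or  bool(b))
        assert XOR(a, b) == int(bool(a) !=  bool(b))
```

Note that *and* as multiplication works cleanly only because $\text{false}=0$ annihilates every product, which is exactly the convenience the question asks about.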

ANSWER

I wouldn't be surprised if a version of this convention goes back to Boole himself, in his algebra of classes. I believe he used $0$ for the empty class and $1$ for the class of "everything". (This was before the set-theoretic paradoxes made people queasy about the class of everything.) Under the natural correspondence between classes and functions to "true" and "false" (where a class $C$ corresponds to the function sending elements of $C$ to "true" and "everything" else to "false"), these would be the constant "false" function for $0$ and the constant "true" function for $1$. So it was natural and convenient to identify the truth values with these numbers.
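The class-to-function correspondence the answer describes is just the indicator function of a class. A small sketch (the universe and the class `evens` are hypothetical examples of mine):

```python
# Sketch: a class C corresponds to its indicator function. The empty
# class gives the constantly-0 ("false") function; the whole universe
# gives the constantly-1 ("true") function.
universe = {1, 2, 3, 4}  # a hypothetical "everything"

def indicator(C):
    """Map a class C to the function sending members to 1, others to 0."""
    return lambda x: 1 if x in C else 0

empty = indicator(set())       # Boole's 0
full  = indicator(universe)    # Boole's 1
evens = indicator({2, 4})      # some intermediate class

assert all(empty(x) == 0 for x in universe)  # constant false
assert all(full(x) == 1 for x in universe)   # constant true
assert [evens(x) for x in sorted(universe)] == [0, 1, 0, 1]
```

Under this identification, $0$ and $1$ as truth values fall out of $0$ and $1$ as the empty and universal classes, which is the historical point being made.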