Why should we subtract 1 to get maximum number in bits


I'm reading this article and it says that:

This means that an unsigned INT can go up to $4,294,967,295$ (which is $2^{32} - 1$). You need to subtract one because counting to $2^{32}$ starts from $1$, while the first binary representation is $0$.

I have a hard time understanding what is meant by the last sentence. Can someone please explain?


There is 1 answer below


When you count up to $2^{32}$, you start at $1$, $2$, ..., and the $2^{32}$th number you reach is $2^{32}$. But since the computer also has to store the number $0$ in an unsigned int, it actually starts counting at $0$, then $1$, and so on. That means the $n$th number for the computer is, in fact, $n-1$. Hence the biggest number it can store, the $2^{32}$th number, is $2^{32}-1$.