With 8 bits is it possible to obtain an integer in more than one way?


This is just a curiosity that came to my mind while thinking about IP addresses. A byte is composed of 8 bits, and each bit can be either $0$ or $1$. An IPv4 address is a group of 4 bytes. These bytes are usually written in dotted-decimal notation (because that is easier for humans), but sometimes I need to work out the binary numbers in my head. So if I know that there is exactly one configuration, and my calculations are correct, then I can be sure that my solution is correct.

My question is: is it possible to obtain a certain integer $k$ using more than one combination of the $8$ bits? For example, suppose we want to represent the number $123$ in binary. Can it be represented by $2$ or more different configurations of the $8$ bits? What if we have more than $8$ bits?
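For the 8-bit case, one quick way to convince yourself before reading a proof is brute force: enumerate all $2^8 = 256$ patterns and count how many decode to each integer. A minimal sketch in Python:

```python
# Brute-force check: every integer 0..255 arises from exactly one 8-bit pattern.
from itertools import product

counts = {}
for bits in product((0, 1), repeat=8):
    # Decode the pattern as an unsigned integer (most significant bit first).
    value = sum(bit << pos for pos, bit in enumerate(reversed(bits)))
    counts[value] = counts.get(value, 0) + 1

print(all(counts.get(k, 0) == 1 for k in range(256)))  # True: each integer exactly once
print(counts[123])                                     # 1: only one pattern gives 123
```

Of course this only settles $n = 8$; the answers below prove it for any number of bits.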

This might seem like a silly question, but a proof would satisfy me.



No.

If we use $n$ bits to represent unsigned integers in the range $0,1,\ldots, 2^n-1$ then each bit pattern corresponds to exactly one of these integers and vice versa.

Suppose, for contradiction, that some integer $k\ge 0$ admits two distinct patterns of $n$ bits, where $2^n>k$. Among all such integers, let $k$ be the minimal one; and among all matching numbers of bits for that $k$, let $n$ be minimal.

The least significant bit is set iff $k$ is odd; hence both patterns certainly agree on that bit.

If $k$ is odd, then replacing the least bit $1$ with a $0$ in each pattern produces two representations for $k-1$ (which is also $\ge 0$ if $k$ is odd), and these are still distinct because the patterns already differed in a higher bit. As this contradicts the minimality of $k$, $k$ must be even.

But then we get two distinct $n-1$ bit patterns for $k/2$ by striking off the least significant bit! Thus if $k>0$, this property of $k/2$ (which is $<2^{n-1}$) contradicts the minimality of $k$. And if $k=0(=k/2)$ we get a contradiction to the minimality of $n$, unless $n=0$. But with no bits at all there is only one bit pattern anyway ...
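The descent in this proof is exactly the standard repeated-halving conversion algorithm: the parity of $k$ forces the last bit, and $k$ divided by $2$ forces everything above it. A small sketch of that correspondence (the function name is my own):

```python
# The proof's descent, run forwards: parity fixes the last bit,
# and the quotient k // 2 fixes the remaining n - 1 bits.
def to_bits(k: int, n: int) -> str:
    """Return the unique n-bit pattern for 0 <= k < 2**n."""
    assert 0 <= k < 2 ** n
    if n == 0:
        return ""                                # base case: the empty pattern
    return to_bits(k // 2, n - 1) + str(k % 2)   # last bit is k's parity

print(to_bits(123, 8))  # 01111011
```

Since each step of the recursion has no choice to make, the pattern it builds is forced, which is another way of seeing the uniqueness.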


No, each nonnegative integer has one and only one binary representation. (That is, if we ignore representations that differ only in the number of leading zeroes.)

Suppose you have two different bit strings; we want to prove that they represent different integers. Look at the leftmost position where the two bit strings differ: one of them has a 1 there and the other has a 0, so the strings look something like abcd1pqrs and abcd0wxyz.

Then, from how binary representation works, we know that the value of abcd1pqrs is at least that of abcd10000, and that the value of abcd0wxyz is at most that of abcd01111. So if we can show that abcd10000$>$abcd01111, then the two original strings can't represent the same number.

Spelled out by decoding each bit string, this is the same as $$ a\cdot 2^{n+4} + b\cdot 2^{n+3} + \cdots + d\cdot 2^{n+1} + 2^n > a\cdot 2^{n+4} + b\cdot 2^{n+3} + \cdots + d\cdot 2^{n+1} + 2^{n-1}+\cdots+2^1+2^0. $$

The terms with $a$ through $d$ are the same on both sides, so we can subtract them away to get $$ 2^n > 2^{n-1}+\cdots+2^1+2^0. $$

However, by the theory of finite geometric progressions, $2^{n-1}+\cdots+2^1+2^0 = \frac{2^n-1}{2-1} = 2^n-1$, so the inequality we need to establish is $$ 2^n > 2^n-1, $$ which is obviously true.
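The same inequality can be checked numerically on a concrete instance. Taking an arbitrary shared prefix (here 10101, my own choice) and a 4-bit tail, the smallest string with a 1 at the differing position still beats the largest string with a 0 there:

```python
# At the first differing position, a 1 outweighs any tail of 1s:
# min of abcd1???? (tail all 0s) vs max of abcd0???? (tail all 1s).
lo_with_one  = int("1010110000", 2)   # prefix 10101, then 1, then 0000
hi_with_zero = int("1010101111", 2)   # prefix 10101, then 0, then 1111
print(lo_with_one, hi_with_zero)      # 688 687
print(lo_with_one > hi_with_zero)     # True: this is 2**n > 2**n - 1
```

The gap is exactly 1, matching the algebra above: $2^n - (2^n - 1) = 1$.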