Representing a binary number


Suppose you wanted to write the number 100000. If you type it in ASCII, this would take 6 characters (which is 6 bytes). However, if you represent it as unsigned binary, you can write it out using 4 bytes.

(from http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/asciiBin.html)

My question: $\log_2 100{,}000 \approx 16.6$, so 17 bits suffice to represent 100,000 in binary, which fits in 3 bytes. So why does it say 4 bytes?
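The arithmetic in the question can be checked directly (a quick sketch in Python; the byte counts are the same ones cited above):

```python
import math

n = 100_000

# ASCII representation: one byte per decimal digit.
ascii_bytes = len(str(n))

# Unsigned binary: bit_length() gives the minimum number of bits,
# and dividing by 8 (rounding up) gives the minimum number of bytes.
bits = n.bit_length()
min_bytes = math.ceil(bits / 8)

print(ascii_bytes)  # 6
print(bits)         # 17
print(min_bytes)    # 3
```

So the minimal binary encoding really is 3 bytes, which is exactly the discrepancy the question asks about.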


3 Answers

BEST ANSWER

This is more of a computer science/engineering question than a math question.

Look at http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Data/unsigned.html. It asks you to "assume that a typical unsigned int uses 32 bits of memory." Programming languages and processors usually use an even number of bytes to represent data.
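The fixed 32-bit width can be seen concretely with Python's `struct` module, whose `"I"` format code corresponds to a C `unsigned int` (4 bytes on typical platforms):

```python
import struct

n = 100_000

# Pack n as a little-endian 32-bit unsigned int ("<I").
# The result is always 4 bytes, no matter how small the value is.
packed = struct.pack("<I", n)

print(len(packed))   # 4
print(packed.hex())  # 'a0860100'  (100000 = 0x186A0, little-endian)
```

The high byte is simply zero padding: the value needs only 3 bytes, but the fixed-width type reserves 4.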


As Joey tells you, the reason is that numbers are usually stored in the data type "integer", which (almost) always comes in 32-bit variants. The processor is tailor-made to add, subtract, and multiply integers of exactly this size; supporting arithmetic on every possible pair of bit widths would require separate circuitry for each combination, which would be a huge waste of space.


You can, in fact, write it out using three bytes. My current project uses 3-byte integers extensively, to save memory in an embedded system.
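For illustration, here is how such a 3-byte encoding might look in Python, using `int.to_bytes` to pick an arbitrary width (the width and byte order here are just example choices, not taken from any particular embedded system):

```python
n = 100_000

# Encode into exactly 3 bytes, little-endian.
three = n.to_bytes(3, "little")
print(three.hex())  # 'a08601'

# Decoding recovers the original value.
print(int.from_bytes(three, "little"))  # 100000
```

This works for any unsigned value up to 2^24 - 1 = 16,777,215; anything larger raises an `OverflowError`.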