Suppose you wanted to write the number 100000. If you type it in ASCII, this would take 6 characters (which is 6 bytes). However, if you represent it as unsigned binary, you can write it out using 4 bytes.
(from http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/asciiBin.html)
My question: $\log_2 100{,}000 \approx 16.6$, so 17 bits are enough to represent 100,000 in binary, and 17 bits fit in 3 bytes (24 bits). So why does it say 4 bytes?
This is more of a computer science/engineering question than a math question.
Look at http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Data/unsigned.html. It asks you to "assume that a typical unsigned int uses 32 bits of memory." Programming languages and processors work with fixed-size integer types, typically a power-of-two number of bytes (1, 2, 4, or 8), rather than with the minimum number of bytes a particular value needs. A value that needs 17 bits therefore gets stored in the next available type size, which on a typical machine is a 32-bit (4-byte) unsigned int.