Converting decimal fractions to binary


I know that if we repeatedly multiply the fraction by 2 and take out the integer part each time, we get the binary form. But why does this method work? Why should we multiply by 2 for the fractional part (since the usual procedure for converting a decimal integer to binary is to divide by 2)?

Best answer:

Let the decimal fraction be $0.d_1d_2d_3\ldots$. To express it in binary, we want to write it as $\frac{a_1}{2} + \frac{a_2}{2^2} + \frac{a_3}{2^3} + \cdots$ with each $a_i \in \{0, 1\}$. If the original fraction is at least $\frac{1}{2}$, we need $a_1 = 1$; otherwise $a_1 = 0$. The easiest way to check this is to multiply both sides by $2$: the RHS becomes $a_1 + \frac{a_2}{2} + \frac{a_3}{2^2} + \cdots$, whose integer part is exactly $a_1$. So if multiplying by $2$ produces an integer part of $1$, set $a_1 = 1$; otherwise set $a_1 = 0$. That takes care of the first digit. Dropping the integer part leaves $\frac{a_2}{2} + \frac{a_3}{2^2} + \cdots$, which has the same form as before, so repeating the step yields $a_2$, then $a_3$, and so on. (You can either drop the integer part after each multiplication, or keep it and read off a digit $1$ whenever the accumulated integer part turns odd, since doubling an integer keeps it even and only the new digit can make it odd.) Hope this helps as an intuitive answer.
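The multiply-and-extract process described above can be sketched in a few lines of Python. This is just an illustrative implementation (the function name and `digits` cutoff are my own choices, not from the answer); note that with floating-point inputs the digits are those of the nearest representable double, so fractions like $0.1$ produce a non-terminating expansion that must be truncated:

```python
def decimal_fraction_to_binary(x, digits=10):
    """Convert a fraction 0 <= x < 1 to a binary digit string
    by repeatedly multiplying by 2 and extracting the integer part."""
    bits = []
    for _ in range(digits):
        x *= 2            # shift the next binary digit into the integer part
        bit = int(x)      # 1 if x >= 1, else 0 -- this is the next digit a_i
        bits.append(str(bit))
        x -= bit          # drop the integer part, keep the remaining fraction
        if x == 0:        # expansion terminated exactly
            break
    return "0." + "".join(bits)

print(decimal_fraction_to_binary(0.625))  # 0.101
print(decimal_fraction_to_binary(0.5))    # 0.1
```

For example, $0.625 \ge \frac{1}{2}$, so the first digit is $1$; doubling the remainder $0.125$ twice more gives digits $0$ and $1$, i.e. $0.101_2 = \frac{1}{2} + \frac{1}{8}$.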