I am supposed to prove that $\frac 1{10}$ is not representable as a finite binary float. I tried proving this via induction, but that did not seem to work, and now I am out of ideas. Thank you
Proof that $\frac 1{10}$ has no finite binary float representation
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
A base-$b$ numeral with $k$ digits after the radix point represents a fraction of the form $\frac{m}{b^k}$, where $m$ is an integer. For example, in familiar base ten:
$$0.04 = \frac{4}{10^2} = \frac{1}{25}$$
$$123.456 = \frac{123456}{10^3} = \frac{15432}{125}$$
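These identities are easy to sanity-check with Python's `fractions` module, which keeps every fraction in lowest terms (a quick check, not part of the proof):

```python
from fractions import Fraction

# A k-digit base-10 numeral is an integer over 10**k,
# which Fraction then reduces to lowest terms.
assert Fraction("0.04") == Fraction(4, 10**2) == Fraction(1, 25)
assert Fraction("123.456") == Fraction(123456, 10**3) == Fraction(15432, 125)
```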
The same logic applies to binary numerals ($b = 2$). If the fraction $\frac{1}{10}$ had a finite binary representation, there would exist integers $m$ and $k$ such that:
$$\frac{1}{10} = \frac{m}{2^k}$$
Or, equivalently:
$$2^k = 10 m$$ $$2^{k-1} = 5m$$
This would require some power of $2$ to be a multiple of $5$. But $5$ is prime and does not divide $2$, so by unique factorization no power of $2$ can have $5$ as a factor. This is impossible.
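You can see the obstruction concretely in Python: `Fraction` stores fractions in lowest terms, so if $\frac{1}{10}$ equaled $\frac{m}{2^k}$, its reduced denominator would have to be a power of two. It isn't, because the factor of $5$ never cancels (a sketch, not part of the proof):

```python
from fractions import Fraction

def is_power_of_two(n: int) -> bool:
    """True iff n == 2**k for some integer k >= 0."""
    return n > 0 and n & (n - 1) == 0

d = Fraction(1, 10).denominator   # reduced denominator: 10 = 2 * 5
print(d, is_power_of_two(d))      # 10 False -- the factor of 5 remains
```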
Instead, you can only write $\frac{1}{10}$ as a repeating binary fraction.
$$0.1_{10} = 0.0\overline{0011}_2 = 0.00011001100110011..._2$$
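The repeating digits can be generated by ordinary long division carried out in base $2$; a minimal sketch:

```python
def binary_digits(num: int, den: int, n: int) -> str:
    """First n binary digits of num/den after the radix point (0 < num/den < 1)."""
    digits = []
    for _ in range(n):
        num *= 2                      # shift one binary place
        digits.append(str(num // den))  # next digit is the integer part
        num %= den                    # keep the remainder and continue
    return "".join(digits)

print("0." + binary_digits(1, 10, 20))  # 0.00011001100110011001
```

Because the remainders cycle ($1 \to 2 \to 4 \to 8 \to 6 \to 2 \to \dots$), the digit block $0011$ repeats forever and never terminates.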
Or round it off to some finite number of binary digits.
$$0.1_{10} \approx 0.0001100110011001100110011001100110011001100110011001101 = \frac{3602879701896397}{2^{55}} = \frac{1}{10} + \frac{1}{2^{55}\times5}$$
This is the value that your computer actually stores if you write `0.1` using standard IEEE 754 double precision. The rounding error is tiny, but it is not zero, and it often trips up newcomers to programming.
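In Python, for example, you can inspect the exact value the literal `0.1` stores and watch the rounding error surface:

```python
from fractions import Fraction
from decimal import Decimal

# The double nearest to 1/10 is exactly 3602879701896397 / 2**55.
print(0.1 == Fraction(3602879701896397, 2**55))  # True

# Decimal(float) shows the stored value exactly; it is slightly above 0.1.
print(Decimal(0.1))

# The error is invisible when printed rounded, but it compounds:
print(0.1 + 0.2 == 0.3)  # False
```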