Why is $0.09 + 0.01$ not exactly representable as $0.1$ in floating-point systems but $0.085 + 0.015$ is?


I understand that $0.09 + 0.01$ in floating-point systems cannot be exactly represented as $0.1$ because the binary equivalent of $0.1$ does not exist—it has infinitely repeating bits.

But why does a mathematically equivalent sum like $0.085 + 0.015$ compare exactly equal to $0.1$? You can readily verify this apparent contradiction in a language like Python or JavaScript:

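For instance, in Python:

```python
# Both sums are mathematically 0.1, but only one of them rounds
# to the double-precision value that the literal 0.1 produces.
print(0.09 + 0.01)            # 0.09999999999999999
print(0.09 + 0.01 == 0.1)     # False

print(0.085 + 0.015)          # 0.1
print(0.085 + 0.015 == 0.1)   # True
```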

Why does this happen?

There are 2 answers below.

Accepted answer:

Let's look more closely at these numbers and at how their binary approximations change them:

def print17(x):
    # print x to 17 significant digits, together with the
    # half-ulp interval [x - 2**-53*x, x + 2**-53*x] around it
    print("%.17e between %.17e and %.17e" % (x, x - 2**-53*x, x + 2**-53*x))

a, b, c = 0.09, 0.01, 0.1
print17(a); print17(b); print17(a+b); print17(c)
>>>  8.99999999999999967e-02 between 8.99999999999999828e-02 and 9.00000000000000105e-02
>>>  1.00000000000000002e-02 between 9.99999999999999847e-03 and 1.00000000000000019e-02
>>>  9.99999999999999917e-02 between 9.99999999999999778e-02 and 1.00000000000000006e-01
>>>  1.00000000000000006e-01 between 9.99999999999999917e-02 and 1.00000000000000019e-01

So it can be seen that the representation of $0.1$ lies above its exact value, while the representation of $0.09$ lies below its exact value. On top of that, the result of the addition is rounded down, to the available floating-point number just below the representation of $0.1$.
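The exact stored values can also be inspected directly; in Python, constructing a `decimal.Decimal` from a float displays the exact binary value behind it (a quick standard-library check):

```python
from decimal import Decimal

# Decimal(x) displays the exact value of the binary double x,
# so the direction of each rounding error is visible directly.
print(Decimal(0.09))         # slightly below 0.09
print(Decimal(0.01))         # slightly above 0.01
print(Decimal(0.09 + 0.01))  # the float just below the representation of 0.1
print(Decimal(0.1))          # slightly above 0.1
```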

In the second example the representation errors are distributed the other way around, and the rounding of the result goes in the "right" direction, towards the representation of $0.1$.

a, b, c = 0.085, 0.015, 0.1
print17(a); print17(b); print17(a+b); print17(c)
>>>  8.50000000000000061e-02 between 8.49999999999999922e-02 and 8.50000000000000200e-02
>>>  1.49999999999999994e-02 between 1.49999999999999977e-02 and 1.50000000000000012e-02
>>>  1.00000000000000006e-01 between 9.99999999999999917e-02 and 1.00000000000000019e-01
>>>  1.00000000000000006e-01 between 9.99999999999999917e-02 and 1.00000000000000019e-01

In general, when two floating-point numbers are added there is always the chance of an error in the last bit of the mantissa. This is a relative error; for the absolute error you have to scale it by the magnitude of the sum itself.
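With Python 3.9+ this last-bit error can be checked via `math.ulp` (a small sketch; in this particular case the error happens to be exactly one unit in the last place):

```python
import math

s = 0.09 + 0.01
# The computed sum lands one representable float below 0.1,
# so the absolute error is exactly one ulp of 0.1.
print(abs(0.1 - s) == math.ulp(0.1))   # True
# The relative error is on the order of 2**-53:
print(abs(0.1 - s) / 0.1 < 2**-52)     # True
```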

Second answer:

It's not that they are exactly represented. All floating-point numbers are approximations within a certain error margin; some calculations happen not to produce a visible rounding error, while others do.
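For example, sums of dyadic fractions (whose denominators are powers of two) are exact in binary, while sums of other decimals may not be; in Python:

```python
# 0.25 and 0.5 are exactly representable in binary, so the sum is exact:
print(0.25 + 0.25 == 0.5)   # True
# 0.1, 0.2 and 0.3 are not, and here the rounding errors do not cancel:
print(0.1 + 0.2 == 0.3)     # False
```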

Consider the following primitive example:

$0.54 + 0.54 = 1.08$

If we round each number to one decimal digit before adding, we get

$0.5 + 0.5 = 1.0 \ne 1.1,$

where $1.1$ is the exact sum $1.08$ rounded to the same precision: rounding before the addition and rounding after it give different results.

The more digits we use, the more precision we have: rounding errors become less likely (roughly speaking) and their margin shrinks. However, these errors will always exist, no matter how we round.
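The round-before versus round-after effect from the example above can be reproduced with Python's `round`:

```python
a = 0.54
# Round each operand to one decimal digit first, then add:
before = round(a, 1) + round(a, 1)   # 0.5 + 0.5 = 1.0
# Add first, then round the exact result:
after = round(a + a, 1)              # 1.08 -> 1.1
print(before, after, before == after)   # 1.0 1.1 False
```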

Here's a more detailed post outlining how floating-point numbers work: https://stackoverflow.com/questions/588004/is-floating-point-math-broken