I understand that $0.09 + 0.01$ cannot evaluate to exactly $0.1$ in floating point, because $0.1$ has no exact binary representation: its binary expansion repeats infinitely.
But why does a mathematically equivalent expression like $0.085 + 0.015$ evaluate to exactly $0.1$? You can readily verify this apparent contradiction in a language like Python or JavaScript:
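In Python, for instance:

```python
print(0.09 + 0.01 == 0.1)    # False
print(0.085 + 0.015 == 0.1)  # True
```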
Why does this happen?

Let's look more closely at these numbers and at how their binary approximations change them.
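One way to see the exact values is Python's `decimal` module: `Decimal(x)` expands the double `x` to its exact decimal value, with no rounding (a sketch; any language that exposes the underlying bits would do):

```python
from decimal import Decimal

# Decimal(x) converts the double x to its exact decimal value,
# exposing the representation error of each literal.
for x in (0.1, 0.09, 0.01, 0.085, 0.015):
    print(repr(x), "->", Decimal(x))
```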
The closest double to $0.1$ lies about $5.6 \cdot 10^{-18}$ above $0.1$, while the closest double to $0.09$ lies about $3.3 \cdot 10^{-18}$ below $0.09$ (and the one for $0.01$ only about $2.1 \cdot 10^{-19}$ above $0.01$). The exact sum of the two operands therefore falls noticeably below the representation of $0.1$, and the addition rounds it down to the next available floating point number below that representation.
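That rounding can be made visible with `math.nextafter` and `math.ulp` (assuming Python 3.9+, where both were added): the computed sum is exactly the representable double just below the one for $0.1$.

```python
import math

s = 0.09 + 0.01
print(s)                               # 0.09999999999999999
# s is exactly the next representable double below the one for 0.1:
print(s == math.nextafter(0.1, 0.0))  # True
print(math.ulp(0.1))                  # spacing of doubles near 0.1: 2**-56
```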
In the second example the representation errors are distributed the other way around: $0.085$ is represented slightly above its true value and $0.015$ slightly below, and the errors combine so that the result lands in the "right" direction, exactly on the representation of $0.1$.
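This can be confirmed with exact rational arithmetic via Python's `fractions` module: `Fraction(x)` gives the exact rational value of the double `x`, so the comparisons below involve no floating point rounding at all.

```python
from fractions import Fraction

# Fraction(x) is the exact rational value of the double x.
# The exact sum of the two doubles equals the double nearest 0.1 ...
print(Fraction(0.085) + Fraction(0.015) == Fraction(0.1))  # True
# ... while in the first example it does not:
print(Fraction(0.09) + Fraction(0.01) == Fraction(0.1))    # False
```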
In general, the addition of two floating point numbers always carries the chance of a rounding error of up to half a unit in the last place (ULP) of the mantissa. This is a relative error; for the absolute error you have to scale it by the magnitude of the sum itself.
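As a sketch of that bound (assuming IEEE 754 doubles), the observed relative error of the first example stays within machine epsilon:

```python
import sys

# For IEEE 754 doubles, eps = 2**-52; half-ULP rounding per operation
# bounds the relative error of a single addition by eps/2. Here the
# result sits one ULP below the double for 0.1, a relative error a
# little over eps/2 once the input representation errors are included.
eps = sys.float_info.epsilon
rel_err = abs((0.09 + 0.01) - 0.1) / 0.1
print(rel_err <= eps)  # True
```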