I'm not sure if this is a maths question or a programming question or a how-does-your-computer-work question. Sorry about that.
I remember from university that 0.999999 ... == 1 since 1 - 0.999999 ... == 0.00000 ... == 0
But this kind of equality turns out to be non-trivial in programming languages.
In Python 3, 0.4 - 0.3 evaluates to 0.10000000000000003, while 1.4 - 1.3 evaluates to 0.09999999999999987, so the two results are not equal even though both "should" be 0.1. R behaves slightly differently, and so on.
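Here's a minimal session showing what I mean, including the workaround I've seen suggested (comparing with a tolerance via math.isclose instead of ==):

```python
import math

# Two subtractions that "should" both be 0.1
a = 0.4 - 0.3
b = 1.4 - 1.3
print(a)        # 0.10000000000000003
print(b)        # 0.09999999999999987

# Exact equality fails...
print(a == b)   # False

# ...but a tolerance-based comparison succeeds
print(math.isclose(a, b))  # True
```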
If I wanted to understand this, where should I start?
I'm guessing this is because of how floating point works on computers. Some numbers have infinitely repeating expansions when written in binary, which is what computers use, so there's bound to be rounding error in the calculation, unless the numbers involved can be written exactly as sums of powers of 2.
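I poked at this a bit in Python and it seems consistent with my guess. Fraction and Decimal can show the exact value a float actually stores: 0.25 (a power of 2) comes out exact, while 0.1 is silently rounded to the nearest representable binary value:

```python
from fractions import Fraction
from decimal import Decimal

# 0.25 = 1/4 is a power of 2, so it is stored exactly
print(Fraction(0.25))   # 1/4

# 0.1 has no finite binary expansion; what gets stored is the
# nearest representable double, slightly larger than 1/10
print(Fraction(0.1))    # 3602879701896397/36028797018963968

# The same stored value, written out in decimal
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```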