$1.96\sin(149^\circ) + 1.00842\sin(203^\circ) + 0.61446\sin(285^\circ) = 0.02193075901$
But if I calculate each of the terms separately and then add them together, I get a result that is slightly different.
$1.96\sin(149^\circ) = 1.009474627$
$1.00842\sin(203^\circ) = -0.3940210846$
$0.61446\sin(285^\circ) = -0.5935227832$
$1.009474627 - 0.3940210846 - 0.5935227832 = 0.0219307592$
The difference between the two answers is tiny:
$0.0219307592 - 0.02193075901 = 1.8903 \times 10^{-10}$
But I'm curious why they are different. I don't think I made an arithmetic mistake or any logic mistake in my process.
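The discrepancy can be reproduced in a few lines of Python (a sketch, assuming the intermediate values were rounded to the digits displayed above before being re-added):

```python
import math

# The three terms, evaluated in full double precision.
terms = [1.96 * math.sin(math.radians(149)),
         1.00842 * math.sin(math.radians(203)),
         0.61446 * math.sin(math.radians(285))]

# Summing the full-precision terms in one go:
full_sum = sum(terms)

# Re-adding the terms after rounding each to the digits shown above
# (9 decimal places for the first term, 10 for the other two):
rounded_sum = round(terms[0], 9) + round(terms[1], 10) + round(terms[2], 10)

print(full_sum)                # ~0.0219307590
print(rounded_sum)             # ~0.0219307592
print(rounded_sum - full_sum)  # a difference on the order of 1e-10
```

So the gap between the two answers comes entirely from re-adding rounded intermediate values, not from any arithmetic mistake.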
That's just a rounding issue in your software ...
The main point is that, since all numbers are encoded using a fixed maximum number of bits, every system has a smallest representable gap between nearby numbers, characterized by a machine epsilon $\epsilon > 0$ (link). Therefore any calculation involving quantities close to $\epsilon$, or whose differences are close to $\epsilon$, will generate noticeable rounding errors. Most often you just don't see them, because your system is "smart enough" to hide them behind nicely rounded display output.
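A minimal sketch in Python (whose floats are IEEE 754 double precision) showing the machine epsilon and how display rounding can hide these tiny errors:

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable float.
eps = sys.float_info.epsilon
print(eps)  # about 2.22e-16

# Classic example: 0.1 + 0.2 is not exactly 0.3 in binary floating point.
a = 0.1 + 0.2
print(a == 0.3)     # False
print(a - 0.3)      # about 5.55e-17, i.e. on the order of eps

# Rounding for display hides the error, which is what many calculators do.
print(f"{a:.10f}")  # 0.3000000000
```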