A couple of years ago, I asked my high school math teacher the following question and she couldn't give me an answer. I had since forgotten about it, but now I'm curious to find the answer again.
Let's say I have the number 1.49. Obviously, rounding this to the nearest integer yields 1. But if I first round it to the nearest tenth, I get 1.5, and rounding that gives 2. Even as someone who's not very well-versed in mathematics, I can tell that this is incorrect. My question is: why? What's wrong with rounding this way? I get the practical reasons, like how it increases the margin of error for something like taxes, but is there a mathematical reason why this is incorrect?
When you round to the nearest integer, you guarantee that the true value is within $\pm 0.5$ of the rounded value. Rounding in stages (often called double rounding) weakens that guarantee: the first rounding, to tenths, can move the value by up to $0.05$, and the second by up to $0.5$, so the final result can be as far as $0.55$ from the original. Your example demonstrates exactly this failure: $1.49$ ends up at $2$, an error of $0.51 > 0.5$.
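The failure is easy to check by hand or in code. Here is a minimal Python sketch; it uses the `decimal` module so that ties round upward the way the question assumes (Python's built-in `round` uses banker's rounding instead), and `round_half_up` is a helper name chosen for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x, ndigits=0):
    """Round x to ndigits decimal places, with ties rounding up."""
    quantum = Decimal(1).scaleb(-ndigits)  # e.g. 0.1 when ndigits=1
    return float(Decimal(str(x)).quantize(quantum, rounding=ROUND_HALF_UP))

x = 1.49

# Single rounding: error is |1.49 - 1| = 0.49, within the 0.5 guarantee.
direct = round_half_up(x)  # 1.0

# Double rounding: 1.49 -> 1.5 -> 2.0, error 0.51, outside the guarantee.
double = round_half_up(round_half_up(x, 1))  # 2.0

print(direct, double)
```

The two results disagree precisely because the intermediate rounding to 1.5 discards the information that the original value was below the halfway point 1.5.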