In Ross's Real Analysis text, he proves that if $f$ and $g$ are integrable over $[a,b]$, then $(f+g)$ is also integrable over $[a,b]$ and $\int_a^b (f+g) = \int_a^b f + \int_a^b g$.
I understand everything until the last step of the proof. Ross strings together some inequalities to conclude: $$\int_a^b (f + g) < \int_a^b f + \int_a^b g + \epsilon$$ and $$\int_a^b (f+g) > \int_a^b f + \int_a^b g - \epsilon$$ Thus, $$\int_a^b f + \int_a^b g - \epsilon < \int_a^b (f+g) < \int_a^b f + \int_a^b g + \epsilon$$ He then concludes from this $$\int_a^b (f + g) = \int_a^b f + \int_a^b g$$ I don't understand how this conclusion follows. He makes several similar arguments, but various manipulations, such as adding $\epsilon$ to every term in the inequality, don't quite get me there, and the strict inequalities don't seem to allow an argument of the kind $x \leq a \wedge x \geq a \implies x = a$.
I would greatly appreciate any help with this.
It is because \begin{align*} -\epsilon<\int(f+g)-\left(\int f+\int g\right)<\epsilon, \end{align*} so \begin{align*} \left|\int(f+g)-\left(\int f+\int g\right)\right|<\epsilon. \end{align*} The key point is that this inequality holds for *every* $\epsilon>0$, and you have probably seen that $|a|<\epsilon$ for every $\epsilon>0$ implies that $|a|=0$, and hence $a=0$.
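In case that last fact is unfamiliar, here is the standard contradiction argument written out, with $a$ standing for $\int(f+g)-\left(\int f+\int g\right)$:

```latex
Suppose $|a|<\epsilon$ for every $\epsilon>0$, but $|a|\neq 0$.
Since $|a|\geq 0$ always, this forces $|a|>0$, so we may take the
particular choice $\epsilon=\tfrac{|a|}{2}>0$. The hypothesis then gives
\[
  |a| < \frac{|a|}{2},
\]
which simplifies to $\tfrac{|a|}{2}<0$, contradicting $|a|>0$.
Hence $|a|=0$, and so $a=0$.
```

This is why strict inequalities are enough here: you never need $\leq$ at any fixed $\epsilon$, because the freedom to shrink $\epsilon$ does the work.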