On page 150 of the book Introduction to Mathematical Statistics, there's the following passage:
Let the symbol $o(h)$ represent any function such that $\lim_{h \to 0}[o(h)/h] = 0$; for example, $h^2 = o(h)$ and $o(h) + o(h) = o(h)$.
I can understand how $h^2 = o(h)$ fits the description. But I couldn't understand the statement $o(h) + o(h) = o(h)$.
Big-O and little-o notation can be confusing because they violate everything we know about using variables in equations: each instance of $o(h)$ refers to a (potentially) different function, and the quantification (i.e. 'for all' vs. 'there exists') has to be inferred from context.
When you see an equation involving $o(h)$ terms, you should read it as saying that each $o(h)$ can be replaced by $f(h)$ for some function $f$ satisfying $\lim_{h \to 0} \frac{f(h)}{h} = 0$, and that the function $f$ might be different each time.
So in your case, writing $o(h)+o(h)=o(h)$ means:

For all functions $f$ and $g$ satisfying $\lim_{h \to 0} \frac{f(h)}{h} = 0$ and $\lim_{h \to 0} \frac{g(h)}{h} = 0$, there exists a function $k$ satisfying $\lim_{h \to 0} \frac{k(h)}{h} = 0$ such that $f(h) + g(h) = k(h)$.

(Note that $f$ and $g$ are universally quantified and $k$ is existentially quantified.)
Proving this statement is now easy using elementary facts about limits of functions: just define $k(h) = f(h) + g(h)$ and observe that $\lim_{h \to 0} \frac{k(h)}{h} = \lim_{h \to 0} \frac{f(h)}{h} + \lim_{h \to 0} \frac{g(h)}{h} = 0 + 0 = 0$.
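As a quick numerical sanity check of this proof (a sketch using my own concrete example functions $f(h) = h^2$ and $g(h) = h^3$, both of which are $o(h)$, not anything from the book):

```python
# Sanity check that o(h) + o(h) = o(h), using the example
# functions f(h) = h^2 and g(h) = h^3 (both little-o of h).

def f(h):
    return h ** 2          # f(h)/h = h -> 0 as h -> 0, so f = o(h)

def g(h):
    return h ** 3          # g(h)/h = h^2 -> 0 as h -> 0, so g = o(h)

def k(h):
    return f(h) + g(h)     # the existential witness from the proof

# As h shrinks, k(h)/h should shrink toward 0, confirming k = o(h).
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, k(h) / h)
```

The printed ratios $k(h)/h$ decrease toward $0$ as $h$ does, matching the limit computed above.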