If $f$ is $O(g)$ over some base, this means that $f(x) = \beta(x)g(x)$, where $\beta$ is eventually bounded; so eventually $|f|$ is at most $c\,|g|$, where $c$ is some constant.
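For instance, under this definition $x^2+x$ is $O(x^2)$ as $x\to+\infty$, because
$$x^2+x=\Big(1+\frac{1}{x}\Big)x^2 \qquad\text{and}\qquad \beta(x)=1+\frac{1}{x}\le 2\ \text{ for } x\ge 1,$$
so eventually $x^2+x\le 2x^2$.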
But I thought $O$ was a way of expressing that two functions are eventually approximately the same. In that case, this seems like a poor definition. If $g$ is unbounded, then the difference between $f$ and $g$ can also be unbounded, so two functions might be $O$ of one another and yet still diverge wildly.
I would have used an additive definition: $f(x)=g(x)+\beta(x)$, where $\beta$ is eventually bounded. Then we would know that, eventually, $f(x)=g(x)\pm\epsilon$ for some positive constant $\epsilon$. That is a statement about the error incurred by assuming $f=g$.
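Under that definition, for example, $x^2+\sin x$ would count as close to $x^2$, while $x^2+x$ would not:
$$x^2+\sin x = x^2+\underbrace{\sin x}_{|\beta(x)|\le 1}, \qquad x^2+x = x^2+\underbrace{x}_{\beta(x)\ \text{unbounded}}.$$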
Similar objections can be raised against the definition of asymptotic equivalence: $x^2\sim x^2+x$ as $x\rightarrow+\infty$, even though their difference grows without bound. So in what sense are they equivalent?
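Spelling out the tension for this pair:
$$\frac{x^2+x}{x^2}=1+\frac{1}{x}\longrightarrow 1 \quad\text{while}\quad (x^2+x)-x^2=x\longrightarrow+\infty \qquad (x\to+\infty),$$
so the relative error vanishes even though the absolute error blows up.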
To appreciate these comparisons we have to look at some actual numbers. Let's take the case of $x^2+x$ vs. $x^2$. Suppose each calculation takes a millisecond, and we have one algorithm $A_1$ that takes $x^2$ calculations to complete a task and another algorithm $A_2$ that takes $x^2+x$ calculations to complete the same task, where $x$ is the problem size.
If $x=1{,}000{,}000$, then $A_1$ takes $10^{12}$ milliseconds, which is $10^9$ seconds, or about $31.7$ years. $A_2$ takes $10^{12}+10^6$ milliseconds, which is $10^9+10^3$ seconds; since $10^3$ seconds is $16$ minutes and $40$ seconds, $A_2$ takes about $31.7$ years plus an extra $17$ minutes or so. Is that significant?
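If it helps, here is a small Python sketch that reproduces these figures (the helper name `runtime_seconds` and the one-millisecond-per-calculation assumption are just for illustration):

```python
# Rough runtime comparison for the two hypothetical algorithms above,
# assuming each calculation takes exactly one millisecond.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, close enough here

def runtime_seconds(calculations, ms_per_calc=1.0):
    """Total runtime in seconds for a given number of calculations."""
    return calculations * ms_per_calc / 1000.0

x = 1_000_000
t1 = runtime_seconds(x**2)       # A1: x^2 calculations
t2 = runtime_seconds(x**2 + x)   # A2: x^2 + x calculations

print(f"A1: {t1:.3e} s  (~{t1 / SECONDS_PER_YEAR:.1f} years)")
print(f"A2: {t2:.3e} s  (~{t2 / SECONDS_PER_YEAR:.1f} years)")
print(f"absolute gap: {t2 - t1:.0f} s  (~{(t2 - t1) / 60:.1f} minutes)")
print(f"relative gap: {(t2 - t1) / t1:.2e}")
```

Running it gives a gap of $1000$ seconds (about $17$ minutes) against roughly $31.7$ years, i.e. a relative difference of one part in a million.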