I have been solving some exercises in Apostol, in the section where he proves that the hyperbola approaches its asymptotes, and the following question came up. When showing that the hyperbola approaches its asymptotes, I used the definition of asymptotic equivalence:
$$\lim\limits_{x \rightarrow \infty} \frac{f(x)}{g(x)} = 1$$
My intuition says that since for every $\epsilon > 0$ there exists an $r$ such that $\left\lvert \frac{f(x)}{g(x)} - 1 \right\rvert < \epsilon$ for all $x > r$, the functions are essentially equal.
However, Apostol shows the result differently, using the limit of the difference:
$$\lim\limits_{x \rightarrow \infty} \bigl(f(x) - g(x)\bigr) = 0$$
What is the difference between the two approaches? I tried to prove the equivalence of statements, but I could not transform $\lvert \frac{f(x)}{g(x)} - 1 \rvert < \epsilon$ to $\lvert f(x)- g(x) \rvert < \epsilon$ easily.
Can someone show this equivalence, or tell me where I go wrong? My intuition says there is no difference between the approaches, and that each one shows the two functions become the same as $x$ grows.
If these are different statements, why did Apostol choose the latter approach?
This is the difference between relative error and absolute error. Apostol shows that the absolute error in using one function to approximate the other is small: for instance, it is eventually always less than $1$. You have shown that the relative error in doing so is small: for instance, it is eventually always less than $1\%$ of the value of the function. The two are not equivalent in general.
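One way to see why the two notions can come apart is the identity (valid wherever $g(x) \neq 0$):
$$ f(x) - g(x) = g(x)\left(\frac{f(x)}{g(x)} - 1\right) \text{,} $$
so the relative error $\frac{f(x)}{g(x)} - 1$ gets multiplied by $g(x)$ itself. If $g$ is unbounded, the absolute error can blow up even while the relative error tends to $0$; if $g \rightarrow 0$, the absolute error can vanish even while the ratio stays far from $1$.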
Let's see how these compare in two scenarios.

First, small relative error does not force small absolute error for rapidly growing functions. Consider $2^x + x$ compared with $2^x$. The ratio satisfies $$ \frac{2^x + x}{2^x} = 1 + \frac{x}{2^x} \xrightarrow{x \rightarrow \infty} 1 $$ but the difference satisfies $$ (2^x + x) - 2^x = x \xrightarrow{x \rightarrow \infty} \infty \text{.} $$ The error is small compared to the size of the function, so it could be a negligible correction relative to that size, yet it eventually exceeds any pre-specified bound on "small".

Second, small absolute error does not force small relative error for functions tending to $0$. Take $f(x) = 2/x$ and $g(x) = 1/x$: the difference $f(x) - g(x) = 1/x \rightarrow 0$, but the ratio $f(x)/g(x) = 2$ never approaches $1$.

So the implication you are wanting between the two methods of measuring error doesn't exist in either direction, without more information about the various functions.
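A quick numeric sketch of both failure directions (the function names `f`, `g`, `p`, `q` are mine, chosen for illustration):

```python
# Direction 1: relative error shrinks, absolute error grows.
# f(x) = 2^x + x approximated by g(x) = 2^x.
def f(x): return 2**x + x
def g(x): return 2**x

for x in [10, 20, 30]:
    ratio = f(x) / g(x)   # tends to 1 (relative error -> 0)
    diff = f(x) - g(x)    # equals x, so it grows without bound
    print(x, ratio, diff)

# Direction 2: absolute error shrinks, relative error stays large.
# p(x) = 2/x approximated by q(x) = 1/x.
def p(x): return 2 / x
def q(x): return 1 / x

for x in [10, 100, 1000]:
    print(x, p(x) / q(x), p(x) - q(x))  # ratio stays 2, difference -> 0
```

Running this shows the ratio in the first pair creeping toward $1$ while the difference climbs, and the ratio in the second pair pinned at $2$ while the difference dies out.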