Differences in two definitions of little-oh notation


There are two definitions I commonly see of little-oh notation, and while they seem to be synonymous at first glance, there is one technicality that continues to bug me: their behavior at $x = x_0$. The definitions are

  1. $f(x) = o(g(x))$ as $x \rightarrow x_0$ if and only if for all $\epsilon > 0$ there exists a $\delta > 0$ such that $|x - x_0| < \delta$ implies $\dfrac{|f(x)|}{|g(x)|} < \epsilon$.
  2. $f(x) = o(g(x))$ as $x \rightarrow x_0$ if and only if $\lim_{x \to x_0} \dfrac{f(x)}{g(x)} = 0$.

As you can see, these are not the definitions that commonly pop up in a web search, which usually deal with what it means for a function to be $o(g(x))$ as $x \rightarrow \infty$. But my issue is specific to this formulation.

Definition (2) stipulates (by the definition of the limit of a function) that for any $\epsilon > 0$ and its corresponding $\delta$, $0 < |x - x_0| < \delta \implies \dfrac{|f(x)|}{|g(x)|} < \epsilon$. Definition (1), on the other hand, does not exclude the point $x = x_0$ from this requirement.

This seems to have a momentous consequence: taking $x = x_0$ in definition (1) gives $\dfrac{|f(x_0)|}{|g(x_0)|} < \epsilon$ for every $\epsilon > 0$, which forces $f(x_0) = 0$ (assuming $g(x_0) \neq 0$), whereas definition (2) places no constraint on $f$ at $x_0$ at all. Which is the correct definition?
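To make the gap concrete, here is a minimal numerical sketch (the particular $f$, $g$, and $x_0$ are my own choices, not from the text): take $f(x) = x$ for $x \neq 0$ but $f(0) = 1$, $g(x) = 1$, and $x_0 = 0$. Then $\lim_{x \to 0} f(x)/g(x) = 0$, so definition (2) is satisfied, yet definition (1) fails at $x = x_0$ itself:

```python
# Hypothetical example separating the two definitions of little-oh.
def f(x):
    return x if x != 0 else 1.0  # jump discontinuity at x0 = 0

def g(x):
    return 1.0  # constant, so f = o(g) means f -> 0

x0 = 0.0
eps = 0.5
delta = eps  # works for definition (2): |f(x)/g(x)| = |x| < eps when 0 < |x - x0| < delta

# Definition (2) excludes x = x0: the bound holds on the punctured neighborhood.
punctured = [x0 - delta / 2, x0 - delta / 4, x0 + delta / 4, x0 + delta / 2]
assert all(abs(f(x)) / abs(g(x)) < eps for x in punctured)

# Definition (1) includes x = x0, where the ratio is 1 >= eps,
# so no choice of delta can satisfy it for eps = 0.5.
assert abs(f(x0)) / abs(g(x0)) >= eps
```

Both assertions pass, so this $f$ is $o(1)$ as $x \to 0$ under definition (2) but not under definition (1).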

I got these two definitions from The Way of Analysis by Robert Strichartz (p. 147).