I am having trouble understanding the usage of big-O and little-o notation when $x \rightarrow 0$ rather than $x \rightarrow \infty$. Most definitions I have found online pertain to the case of $x \rightarrow \infty$.
One definition given in a university course of mine is:
Big-O: $f(x) =O(1)$ if as $x\rightarrow \infty$ we have $|f(x)|\leq C$ for some constant $C$, implying that $f(x) = O(g(x))$ if $f(x)/g(x)=O(1)$.
Little-o: $f(x) = o(1)$ if as $x\rightarrow \infty$ we have $f(x)\rightarrow 0$, implying that $f(x) = o(g(x))$ if $f(x)/g(x)=o(1)$ (i.e. $f(x)/g(x)\rightarrow 0$ as $x\rightarrow \infty$).
Are these definitions also applicable if I just replace $\infty$ with $0$? I have realized that when $x\rightarrow 0$ and we use O-notation we keep the lowest-order terms, as opposed to when $x\rightarrow \infty$, where we keep the highest-order terms. This, however, has led me to conclusions that I find questionable.
Consider the following for example:
Let $x\rightarrow 0$ and $f(x) = x^n$. We have $|f(x)/x^n|=1$, so $f(x) = O(x^n)$. Similarly, $|f(x)/x^{n-1}|=|x|\rightarrow 0$, so this ratio is eventually bounded by any positive constant $C$. Does this mean that if $f(x)=O(x^n)$ then we also have $f(x)=O(x^{n-1})$, $f(x)=O(x^{n-2}),\ldots,f(x)=O(x)$? Moreover, it would also seem that since $|f(x)|\rightarrow 0$ as $x\rightarrow 0$, we always have $f(x) = o(1)$.
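A quick numerical sketch of the ratios in question (my own illustration, not part of the course definitions): with $f(x)=x^n$, the ratio $|f(x)/x^n|$ stays exactly $1$, while $|f(x)/x^{n-1}| = |x|$ shrinks as $x\to 0$.

```python
# Compare |f(x)/x^n| and |f(x)/x^{n-1}| for f(x) = x^n as x -> 0.
n = 3
for x in [1e-1, 1e-3, 1e-6]:
    f = x**n
    # The first ratio is constant (1); the second tends to 0 with x.
    print(x, abs(f / x**n), abs(f / x**(n - 1)))
```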
I hope the example demonstrates how I might have misunderstood this concept. Most of all, I think I just need a clear definition of big-O and little-o notation in the case where $x\rightarrow 0$.
Any help or references would be greatly appreciated!
The "big O" and "little o" notations can be defined for functions relative to any point $a$ on the "extended real line" $[-\infty,\infty] = \mathbb R\cup \{\pm\infty\}$: you just need to know what it means to say $x\to a$. The idea of the definitions is to compare the growth of functions defined near $a$, but perhaps not at $a$ itself. The condition that $f$ is $O(g)$ at $a\in [-\infty,\infty]$ is that $|f(x)|$ remains bounded relative to $|g(x)|$ as $x \to a$, that is, there are positive constants $\delta$ and $C$ such that $$|f(x)|<C\,|g(x)|, \quad \forall x \text{ satisfying } 0<|x-a|<\delta. $$ (For finite $a$ this reads literally as stated; for $a=+\infty$ the condition $0<|x-a|<\delta$ should be read as $x>\delta$, and for $a=-\infty$ as $x<-\delta$.)
The condition that $f$ is $o(g)$ at $a \in [-\infty,\infty]$ is essentially that $|f(x)|$ is of "lower order" than $|g(x)|$ as $x\to a$, so that relative to $|g(x)|$ it is negligible. That is, for any $\epsilon>0$ there is an $\eta>0$ such that $|f(x)|<\epsilon\,|g(x)|$ for all $x$ satisfying $0<|x-a|<\eta$, or equivalently $\lim_{x \to a} |f(x)|/|g(x)| =0$.
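The limit form of the definition can be probed numerically. The following is a rough illustrative sketch (the function `ratio_near` is my own hypothetical helper, not standard terminology): sample the ratio $|f(x)/g(x)|$ at points approaching $a$ and watch whether it tends to $0$.

```python
def ratio_near(f, g, a, ks=range(1, 8)):
    """Sample |f(x)/g(x)| at x = a + 10**-k for each k, approaching a."""
    return [abs(f(a + 10.0**-k) / g(a + 10.0**-k)) for k in ks]

# f(x) = x**2 should be o(x) at a = 0: the sampled ratios shrink toward 0.
print(ratio_near(lambda x: x**2, lambda x: x, 0.0))
```

Of course a finite sample can never prove a limit; this is only a sanity check, not a substitute for the $\epsilon$-$\eta$ argument.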
The case you focus on is when you take $a=0$, and you are correct that $f(x)=x^n$ is $o(x^{n-1})$ and $O(x^n)$ (indeed any function $f$ will have the property that it is $O(f)$, i.e. "big-O" of itself). In general, taking $a=0$ you have $x^m$ is $o(x^n)$ if $m>n$ (and $O(x^n)$ when $m=n$ as we already noted). This is the opposite way around to what happens if you take $a=+\infty$, but hopefully that makes sense: if you take, say, $f(x) = x^5+4x^2+1$, then for large $x$, the term $x^5$ dominates, whereas for $x$ very close to $0$, the values of $f(x)$ will all be very close to $1 =x^0$.
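The dominance claim for $f(x)=x^5+4x^2+1$ is easy to verify numerically (again just an illustration of the point above): for large $x$ the ratio $f(x)/x^5$ is close to $1$, while near $0$ it is $f(x)$ itself that is close to $1$.

```python
# f(x) = x^5 + 4x^2 + 1: x^5 dominates for large x, the constant 1 near 0.
f = lambda x: x**5 + 4 * x**2 + 1
for x in [1e3, 1e-3]:
    # f(x)/x^5 ~ 1 for large x; f(x)/1 ~ 1 for x near 0.
    print(x, f(x) / x**5, f(x) / 1)
```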
It is probably a failing of notation that we do not write something like "$f$ is $O_a(g)$", or "$f$ is $o_a(g)$'' to make it clear that the notion depends on the point you are interested in, but the convention is so established that I think we just have to live with it!