In big-O notation, $f(x) = O(g(x))$ as $x\rightarrow \pm\infty$ if
$$\exists C, \delta>0: \forall |x| \geq \delta: |f(x)| < C |g(x)|$$
and, for the case I'm more interested in here, $f(x) = O(g(x))$ as $x\rightarrow 0$ if
$$\exists C, \delta>0: \forall |x| \leq \delta: |f(x)| < C |g(x)|$$
In little-o notation, $f(x) = o(g(x))$ as $x\rightarrow 0$ means
$$\lim_{x\rightarrow 0}\frac{|f(x)|}{|g(x)|} = 0$$
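Working through a concrete instance of that definition, with $f(x) = x^2$ and $g(x) = x$:
$$\lim_{x\rightarrow 0}\frac{|x^2|}{|x|} = \lim_{x\rightarrow 0}|x| = 0,$$
so $x^2 = o(x)$ as $x\rightarrow 0$.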
I've come across big-O before, but this is the first time I've seen little-o. I have experience in applied math, but very little in pure math, and my first reaction (for $x\rightarrow 0$) was "they mean the same thing". Then I thought "but little-o is a more precise upper bound".
What is the relationship between these two notations, and what does the distinction mean practically?
In the briefest possible terms, $f \in O(g)$ is like saying $f \leqslant g$ asymptotically, and $f \in o(g)$ is like saying $f < g$ asymptotically.
As an example, if $f(x) = x$ and $g(x) = 2x$, we have $f \in O(g)$, but $f \notin o(g)$ and $g \notin o(f)$. If $f(x) = x^2$ and $g(x) = x$, then $f \in o(g)$, and hence also $f \in O(g)$.
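You can see the distinction numerically by tracking the ratio $|f(x)/g(x)|$ as $x\rightarrow 0$: for big-O it only needs to stay bounded, while for little-o it must shrink to zero. A minimal sketch (the helper name `ratios` is just for this illustration):

```python
def ratios(f, g, xs):
    """Return |f(x)/g(x)| at each sample point."""
    return [abs(f(x) / g(x)) for x in xs]

# Sample points approaching 0.
xs = [10.0 ** -k for k in range(1, 8)]

# f(x) = x, g(x) = 2x: the ratio is a constant 1/2, so the bound
# holds with C = 1 (f in O(g)) but never shrinks (f not in o(g)).
r1 = ratios(lambda x: x, lambda x: 2 * x, xs)

# f(x) = x^2, g(x) = x: the ratio is |x| itself, which tends to 0,
# so f in o(g) (and therefore also f in O(g)).
r2 = ratios(lambda x: x * x, lambda x: x, xs)

print(r1)  # stays at 0.5
print(r2)  # tends to 0
```

The bounded-but-nonvanishing ratio in the first case is exactly the gap between the two notations.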
As an aside, be very aware of context. In various probabilistic situations, we define $O$-notation with a limit as $x\rightarrow 0$. In most algorithmic applications, we take limits as $x \rightarrow \infty$. Always make sure it's clear what you're working with before using either notation. $O$-notation is probably the most abused thing in all of mathematics.