E. Landau famously proved in his 17-page PhD thesis of 1899 that the statement $$ \lim_{n\to\infty}\frac{\sum_{k=1}^{n} \lambda(k)}{n^{\frac{1}{2} +\varepsilon}} =0, \quad \forall \varepsilon>0,$$
for the Liouville lambda function $\lambda$, is equivalent to the Riemann Hypothesis (RH).
Defining $L(n):= \sum_{k=1}^{n} \lambda(k)$ (Liouville's partial-sum sequence) and using (Landau's) big-O notation, we might then write the RH as $$ L(n)=\mathcal{O}(n^{\frac{1}{2}+\varepsilon}), \quad \forall \varepsilon>0. \tag{1}$$
As is often stated, this is not equivalent to $$L(n)=\mathcal{O}(n^{\frac{1}{2}}). \tag{2}$$
This is where I am stuck: I fail to see the difference between (1) and (2), and I would be very grateful if someone could outline/explain it. What would fit in between the two without being either of them?
When it's written as $L(n)=O(n^{1/2+\epsilon})$, keep in mind that $\epsilon$ can never be exactly zero; you have to think about what happens as $n$ gets large. No matter which $\epsilon$ you pick, as long as it's positive, $n^{\epsilon}$ will still diverge as $n$ tends to infinity. This matters when comparing $L(n)$ to $n^{1/2}$, since it determines whether the ratio $\frac{L(n)}{n^{1/2}}$ is bounded or diverges.
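As a quick numerical illustration (my own sketch, not part of the argument above): one can compute $\lambda(k)=(-1)^{\Omega(k)}$ by trial division and watch the ratio $L(n)/\sqrt{n}$ for small $n$. Boundedness of this ratio is exactly statement (2); RH only guarantees (1).

```python
# Sketch: compute the Liouville function lambda(k) = (-1)^Omega(k), where
# Omega(k) counts prime factors with multiplicity, and track L(n)/sqrt(n).
import math

def big_omega(k):
    """Number of prime factors of k, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= k:
        while k % d == 0:
            k //= d
            count += 1
        d += 1
    if k > 1:  # leftover prime factor
        count += 1
    return count

def liouville(k):
    return -1 if big_omega(k) % 2 else 1

L, ratios = 0, {}
for n in range(1, 10001):
    L += liouville(n)
    if n in (10, 100, 1000, 10000):
        ratios[n] = L / math.sqrt(n)

for n, r in ratios.items():
    print(n, round(r, 3))
```

Small samples like this cannot distinguish (1) from (2), of course; they only make the objects in the question concrete.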
The reason a limit argument doesn't work here is the implied constants. The statement, quantified over all $\epsilon > 0$, means that for each $\epsilon$ there is a constant $c_{\epsilon}$, depending on $\epsilon$, such that $|L(n)| < c_{\epsilon}\,n^{1/2+\epsilon}$ for all $n\in\mathbb{N}$. If you fix any $\epsilon$ greater than zero, then $c_{\epsilon}$ is a genuine constant. However, if you let $\epsilon$ tend to zero, it is no longer fixed but a variable: $c_{\epsilon}$ can potentially approach infinity as $\epsilon$ tends to zero, so you cannot pass to the limit and conclude $|L(n)| \le c\,n^{1/2}$ for a single constant $c$.
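To make the blow-up of $c_{\epsilon}$ concrete, and to give one function that sits strictly between (1) and (2), here is a toy example of my own (not a claim about $L(n)$ itself): take $f(n)=\sqrt{n}\,\log n$. Then $f(n)/n^{1/2+\epsilon}=\log(n)/n^{\epsilon}$ is bounded for every fixed $\epsilon>0$; a short calculus exercise shows the supremum is attained at $n=e^{1/\epsilon}$ and equals $1/(e\epsilon)$. So the best constant $c_{\epsilon}=1/(e\epsilon)$ diverges as $\epsilon\to 0$, while $f(n)/\sqrt{n}=\log n$ is unbounded, i.e. $f(n)\ne O(\sqrt{n})$.

```python
# Toy example: f(n) = sqrt(n) * log(n) satisfies f(n) = O(n^(1/2+eps))
# for every eps > 0, but f(n) is NOT O(sqrt(n)). The implied constant
# c_eps = sup_n log(n)/n**eps = 1/(e*eps) blows up as eps -> 0.
import math

def c_eps(eps):
    """Exact supremum of log(n) / n**eps over real n >= 1 (attained at n = e^(1/eps))."""
    return 1.0 / (math.e * eps)

for eps in (0.5, 0.1, 0.01):
    # sanity check: log(n) <= c_eps(eps) * n**eps on a sample of n
    assert all(math.log(n) <= c_eps(eps) * n**eps + 1e-12
               for n in range(2, 100_000, 7))
    print(eps, c_eps(eps))  # the "constant" grows like 1/(e*eps)
```

This is exactly the kind of function the asker requested: it fits in between (1) and (2) without being either a counterexample to (1) or an instance of (2).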