Let $(X_1,\ldots,X_n)$ be a random sample with density $f$. Let $\hat{F}_n$ be the ECDF and define $$\hat{f}_n(t)=\frac{\hat{F}_n(t+\lambda_n)-\hat{F}_n(t-\lambda_n)}{2\lambda_n},$$ where $\hat{f}_n$ is an estimator of the density $f$ and $(\lambda_n)$ is a sequence of positive constants. Assuming that $f$ is continuously differentiable and that $\lambda_n\rightarrow 0$ and $n\lambda_n \rightarrow \infty$ as $n\rightarrow\infty$, I have derived the following: \begin{align} \text{bias}(\hat{f}_n(t)) &=O(\lambda_n),\\ \text{variance}(\hat{f}_n(t)) &= \frac{1}{2n\lambda_n}f(t)+O(1/n). \end{align} Combining these (squaring the bias) gives $$\text{MSE}(\hat{f}_n(t))=\frac{1}{2n\lambda_n}f(t)+O(1/n)+O(\lambda_n^2).$$ So far so good. The lecture notes I took this example from then state that $O(1/n)=o\!\left(\frac{1}{n\lambda_n}\right)$, which is logical: if $\text{error}\leq M\cdot\frac{1}{n}$, then multiplying by $n\lambda_n$ gives $\text{error}\cdot n\lambda_n \leq M\lambda_n\rightarrow 0$ since $\lambda_n\rightarrow 0$ as $n\rightarrow\infty$, hence $\text{error}=o\!\left(\frac{1}{n\lambda_n}\right)$.
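As a sanity check on the derivation, here is a small Monte Carlo sketch of the estimator. The target density, evaluation point, and constant ($f = N(0,1)$, $t = 0$, $c = 1$) are my own choices for illustration, not part of the notes; with $\lambda_n = n^{-1/3}$ the empirical MSE should shrink as $n$ grows, roughly at the $n^{-2/3}$ rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def fhat(x, t, lam):
    # ECDF difference quotient: (F_n(t+lam) - F_n(t-lam)) / (2*lam),
    # i.e. the fraction of the sample in (t-lam, t+lam], divided by 2*lam.
    n = len(x)
    return np.sum((x > t - lam) & (x <= t + lam)) / (2 * n * lam)

def mse(n, t=0.0, reps=2000, c=1.0):
    # Empirical MSE of fhat at t, over `reps` independent samples of size n.
    lam = c * n ** (-1 / 3)          # bandwidth lambda_n = c * n^{-1/3}
    true_f = 1 / np.sqrt(2 * np.pi)  # N(0,1) density at t = 0
    errs = [fhat(rng.standard_normal(n), t, lam) - true_f
            for _ in range(reps)]
    return np.mean(np.square(errs))

print(mse(100), mse(10000))  # the second value should be much smaller
```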
My question is: why would we want $o\!\left(\frac{1}{n\lambda_n}\right)$ as an error term? I have seen computations of the asymptotic behaviour of the MSE of an estimator before; in this case we might, for instance, say that $\lambda_n=c\cdot n^{-1/3}$ gives the best asymptotic rate. It seems that we want a little-$o$ error term precisely because we can disregard it in this computation. Is it always the case that little-$o$ terms may be disregarded when deriving asymptotic rates? Please provide some intuition.
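For concreteness, here is the balancing computation I have in mind, with the constants absorbed into $A=f(t)/2$ and some $B>0$ for the squared-bias term (the exact value of $B$ is not given in the notes):

```latex
\text{MSE}(\lambda) \approx \frac{A}{n\lambda} + B\lambda^2,
\qquad
\frac{d}{d\lambda}\,\text{MSE}(\lambda)
  = -\frac{A}{n\lambda^2} + 2B\lambda = 0
\;\Longrightarrow\;
\lambda^3 = \frac{A}{2Bn}
\;\Longrightarrow\;
\lambda^\ast = c\, n^{-1/3},
\qquad
\text{MSE}(\lambda^\ast) = O\!\left(n^{-2/3}\right).
```

This minimisation only makes sense if the remaining $o\!\left(\frac{1}{n\lambda_n}\right)$ term is genuinely negligible relative to both terms being balanced.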