Let's say we have a function $f_{\varepsilon}(x)$ and we know that $$\lim_{\varepsilon \to 0} f_\varepsilon(x)=g(x).$$ This is equivalent to saying that $f_\varepsilon(x)=g(x)+o_{\varepsilon\to 0}(1)$. I would now like to use this approximation of $f_\varepsilon(x)$ in some calculations. For example, suppose $f_\varepsilon(x)$ is matrix-valued and $\lim_{\varepsilon \to 0} f_\varepsilon(x)=g(x)$, where $g(x)$ is a matrix of the same dimensions. Now let $f_\varepsilon(x)=\prod_{k=0}^{1/\sqrt{\varepsilon}-1}f_k(x)$, so that the number of factors grows as $\varepsilon\to 0$. Knowing that $f_\varepsilon(x)\to g(x)$ as $\varepsilon\to 0$, is it valid to write $f_\varepsilon(x)=g(x)+o_{\varepsilon\to 0}(1)$ in this situation, and if not, why not?
The reason for my question is that I know $\prod_{k=0}^\infty f_k(x)$ converges, and I wanted to understand how each factor of this product contributes to the final limit. However, when I use this approximation I obtain a result that disagrees with my numerical experiments. I suspect the discrepancy has something to do with the $\sqrt{\varepsilon}$ scaling, but I am not sure.
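To make the question concrete, here is a minimal scalar sketch of the setup (the choice $f_k = 1 + 2^{-(k+1)}$, independent of $x$, is purely illustrative and not from my actual problem). It truncates the convergent infinite product at $N = 1/\sqrt{\varepsilon}$ factors and checks how the truncation error behaves as $\varepsilon \to 0$:

```python
import math

# Illustrative (assumed) factors: f_k = 1 + 2^(-(k+1)), chosen so that
# the infinite product converges (sum of |log f_k| converges).
def f(k):
    return 1.0 + 2.0 ** (-(k + 1))

# Approximate the "infinite" product g by a long truncation.
g = math.prod(f(k) for k in range(200))

# f_eps = product of the first N = 1/sqrt(eps) factors.
for eps in [1e-1, 1e-2, 1e-3]:
    N = round(1 / math.sqrt(eps))
    f_eps = math.prod(f(k) for k in range(N))
    print(f"eps={eps:g}  N={N}  |f_eps - g| = {abs(f_eps - g):.3e}")
```

In this toy case the error $|f_\varepsilon - g|$ does shrink as $\varepsilon \to 0$, i.e. $f_\varepsilon = g + o(1)$ holds, but the *rate* at which it shrinks is controlled by the $N = 1/\sqrt{\varepsilon}$ scaling, which is exactly the part I am unsure how to account for in later calculations.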