I am trying to understand an equality on page 24 of a paper about locally stationary processes.
The equality involves stochastic Big-O notation (see D5 in the paper for the definition) and is as follows:
$$ \sum_{j = 0}^T \left[\prod_{k=0}^{j-1} \alpha\left(\frac{t-k}{T}\right)\right]\varepsilon_{t-j} = \sum_{j = 0}^T \alpha\left(\frac{1}{T}\right)^j \varepsilon_{t-j} + O_p\left(\frac{1}{T}\right), $$ for $T \rightarrow \infty$, where $\varepsilon_{t-j}$ is white noise. (I have left out some coefficients from the paper and, for simplicity, only consider finitely many summands.)
Here I have already proved that $$ \prod_{k=0}^{j-1} \alpha\left(\frac{t-k}{T}\right) = \alpha\left(\frac{1}{T}\right)^j + O\left(\frac{1}{T}\right) $$ with deterministic Big-O notation.
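For context, here is a sketch of the kind of telescoping bound I used, assuming $\alpha$ is Lipschitz with constant $L$ and $\sup_u|\alpha(u)| \le M$ (these assumptions are my own reading of the setup). For each fixed $j$,
$$ \left|\prod_{k=0}^{j-1}\alpha\!\left(\tfrac{t-k}{T}\right)-\alpha\!\left(\tfrac{1}{T}\right)^{j}\right| \le M^{\,j-1}\sum_{k=0}^{j-1}\left|\alpha\!\left(\tfrac{t-k}{T}\right)-\alpha\!\left(\tfrac{1}{T}\right)\right| \le M^{\,j-1}\,\frac{L}{T}\sum_{k=0}^{j-1}\bigl(|t-1|+k\bigr) = O\!\left(\frac{1}{T}\right). $$
Note that the implied constant here depends on $j$.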
However, if I plug this result into the equation above, I get
\begin{align*} \sum_{j = 0}^T \left[\prod_{k=0}^{j-1} \alpha\left(\frac{t-k}{T}\right)\right]\varepsilon_{t-j} &= \sum_{j = 0}^T \left[\alpha\left(\frac{1}{T}\right)^j + O\left(\frac{1}{T}\right)\right]\varepsilon_{t-j} \\ &= \sum_{j = 0}^T \alpha\left(\frac{1}{T}\right)^j \varepsilon_{t-j} + \sum_{j = 0}^T O\left(\frac{1}{T}\right) \varepsilon_{t-j}. \end{align*}
Now we would need the following:
$$ \sum_{j = 0}^T O\left(\frac{1}{T}\right) \varepsilon_{t-j} = O_p\left(\frac{1}{T}\right)$$
It is easy to see that $O\left(\frac{1}{T}\right) \varepsilon_{t-j} = O_p\left(\frac{1}{T}\right)$ for each fixed $j$, but summing up I only get $$ \sum_{j = 0}^T O\left(\frac{1}{T}\right) \varepsilon_{t-j} = \sum_{j = 0}^T O_p\left(\frac{1}{T}\right) = T \cdot O_p\left(\frac{1}{T}\right) = O_p(1), $$ which is not the claimed $O_p\left(\frac{1}{T}\right)$. What am I missing?
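To get a feel for which rate is actually correct, here is a small numerical sketch. Every concrete choice in it is mine, not the paper's: I take the linear coefficient function $\alpha(u) = 0.6 + 0.2u$, standard normal white noise, and hold $t$ fixed at $t = 1$ while $T$ grows. The script repeatedly draws the remainder $\sum_{j}\bigl[\prod_{k<j}\alpha\bigl(\tfrac{t-k}{T}\bigr) - \alpha\bigl(\tfrac{1}{T}\bigr)^j\bigr]\varepsilon_{t-j}$ and prints its median absolute value for several $T$:

```python
import random
import statistics

random.seed(0)

def alpha(u):
    # Hypothetical smooth coefficient function with |alpha| < 1 on [-1, 1]
    # (my own choice, not from the paper).
    return 0.6 + 0.2 * u

def remainder(T, t=1):
    """One draw of the error term
    sum_j [prod_{k<j} alpha((t-k)/T) - alpha(1/T)**j] * eps_{t-j},
    keeping t fixed while T grows (my reading of the setup)."""
    eps = [random.gauss(0.0, 1.0) for _ in range(T + 1)]  # eps[j] plays eps_{t-j}
    a1 = alpha(1.0 / T)
    total = 0.0
    prod = 1.0   # empty product for j = 0
    power = 1.0  # alpha(1/T)**0
    for j in range(T + 1):
        total += (prod - power) * eps[j]
        prod *= alpha((t - j) / T)  # extend the product to include k = j
        power *= a1
    return total

for T in (50, 100, 200, 400, 800):
    med = statistics.median(abs(remainder(T)) for _ in range(200))
    print(f"T={T:4d}  median |remainder| = {med:.5f}")
```

If the printed medians shrink roughly like $1/T$ as $T$ doubles, the paper's $O_p(1/T)$ rate is plausible and the crude term-by-term bound above is simply too loose.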