I am reading a paper that reaches the following conclusion, and I am not sure how it was deduced. We first suppose that we have a sequence of random variables $X_n\in \mathbb{L}^2$. The main result says that there exists a probability space $\Omega$ supporting a sequence of random variables $S_n$ which have the same distributions as $\sum\limits_{j=0}^{n-1}X_j$, and a sequence $Z_n$ of i.i.d. random variables with distribution $\mathcal{N}(0,\sigma^2)$ for some $\sigma^2>0$ such that
$\sup\limits_{1\leq k\leq n}\left|S_k-\sum\limits_{j=1}^k Z_j\right|=o\left(\left(n\log\log{n}\right)^\frac{1}{2}\right)$ almost surely (this is known as an 'almost sure invariance principle').
The paper then goes on to suggest that this is sufficient to deduce the law of the iterated logarithm for $X_n$:
$\sum\limits_{j=0}^{n-1}X_j=O\left(\left(n\log\log{n}\right)^\frac{1}{2}\right)$ a.s.
Does the following methodology work?
$\frac{S_n}{\left(n\log\log{n}\right)^\frac{1}{2}}=\frac{\left(S_n-\sum\limits_{j=1}^nZ_j\right)}{\left(n\log\log{n}\right)^\frac{1}{2}}+\frac{\sum\limits_{j=1}^nZ_j}{\left(n\log\log{n}\right)^\frac{1}{2}}$.
The first term tends to $0$ almost surely as $n\to\infty$ by the main result. For the second term, the classical law of the iterated logarithm gives $\limsup\limits_{n\to\infty}\left|\sum_{j=1}^n Z_j\right|/\left(2n\log\log n\right)^\frac{1}{2}=\sigma$ a.s., so the second term is almost surely bounded.
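To spell the combination out, here is a sketch of the estimate (assuming, as seems intended, that the invariance-principle bound holds almost surely):

```latex
\[
\limsup_{n\to\infty}\frac{|S_n|}{\left(2n\log\log n\right)^{1/2}}
\;\le\;
\underbrace{\limsup_{n\to\infty}
  \frac{\bigl|S_n-\sum_{j=1}^{n}Z_j\bigr|}{\left(2n\log\log n\right)^{1/2}}}_{=\,0
  \text{ by the main result}}
\;+\;
\underbrace{\limsup_{n\to\infty}
  \frac{\bigl|\sum_{j=1}^{n}Z_j\bigr|}{\left(2n\log\log n\right)^{1/2}}}_{=\,\sigma
  \text{ by the classical LIL}}
\;=\;\sigma
\quad\text{a.s.}
\]
```

Hence $S_n=O\left(\left(n\log\log n\right)^\frac{1}{2}\right)$ a.s. One remaining point worth noting: this bound is a statement about the joint law of the whole sequence $(S_n)_{n\geq 1}$, so it transfers to $\left(\sum_{j=0}^{n-1}X_j\right)_{n\geq 1}$ only if the two sequences are equal in distribution as processes, not merely term by term.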
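As a numerical sanity check (not from the paper), one can simulate the Gaussian partial sums and verify that the scaled quantity $\left|\sum_{j=1}^k Z_j\right|/\left(k\log\log k\right)^\frac{1}{2}$ stays of constant order, consistent with the LIL constant $\sqrt{2}\,\sigma$; the sample size and seed below are arbitrary choices:

```python
import numpy as np

# Simulate i.i.d. Z_j ~ N(0, sigma^2) and form the partial sums S_k.
rng = np.random.default_rng(0)
sigma = 1.0
n = 200_000
Z = rng.normal(0.0, sigma, size=n)
S = np.cumsum(Z)

# Scale by (k log log k)^{1/2}; start at k = 3 so that log log k > 0.
k = np.arange(3, n + 1)
ratio = np.abs(S[2:]) / np.sqrt(k * np.log(np.log(k)))

# The LIL predicts limsup_k ratio = sqrt(2) * sigma almost surely,
# so the running maximum should remain of constant order.
print(ratio.max())
```

The printed maximum is typically of the same order as $\sqrt{2}\,\sigma \approx 1.41$, which is the boundedness used for the second term above.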