In this problem, Resnick states that if $\{X_n,n \geq 1\}$ is an independent sequence of random variables and $S_n := \sum_{i=1}^{n} X_i$, then $\frac{S_n}{n} \overset{a.s.}{\to} 0$ if and only if $\frac{S_n}{n} \overset{P}{\to} 0$ and $\frac{S_{2^n}}{2^n} \overset{a.s.}{\to} 0$.
My question is the following: Why did Resnick include the assumption of independence?
Without independence, one can construct a sequence of random variables such that $S_n/n\to 0$ in probability and $S_{2^n}/2^n\to 0$ almost surely, yet $S_n/n\not\to 0$ almost surely.
Indeed, let $\left(Y_i\right)_{i\geqslant 1}$ be an i.i.d. sequence of non-negative random variables such that $$ \sum_{i\geqslant 1}2^i\mathbb P\left(Y_1>2^i\right)=\infty \tag{*} $$ and $$ \forall \varepsilon>0, \quad \sum_{i\geqslant 1} \mathbb P\left(Y_1>\varepsilon 2^i\right)<\infty \tag{**}. $$ Finally, let $X_i=Y_i-Y_{i+1}$ (note that these $X_i$ are certainly not independent), so that the sum telescopes: $S_n=Y_1-Y_{n+1}$ for each $n$. Then
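For concreteness (this particular distribution is my choice; any tail satisfying $(*)$ and $(**)$ works), one can take
$$ \mathbb P\left(Y_1>t\right)=\min\left(1,\frac{1}{t\log t}\right),\qquad t>0. $$
Then $2^i\mathbb P\left(Y_1>2^i\right)=\frac{1}{i\log 2}$ for every $i\geqslant 1$, so the series in $(*)$ diverges like the harmonic series, while for each fixed $\varepsilon>0$ and all $i$ large enough, $\mathbb P\left(Y_1>\varepsilon 2^i\right)\leqslant \frac{1}{\varepsilon 2^i\log\left(\varepsilon 2^i\right)}$, which is summable in $i$ by comparison with a geometric series, giving $(**)$.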