Let $Z_1, Z_2, \ldots$ be independent random variables on the same probability space, defined as follows:
$$P(Z_n=n)=P(Z_n=-n)=\frac{1}{2n^2} \quad \text{and} \quad P(Z_n=0)=1-\frac{1}{n^2}$$
Is it true that
$$\lim_{n\to\infty} \frac{1}{\sqrt{n}}(Z_1+\cdots+Z_n) = 0 \quad \text{a.s.}$$
?
I thought of using the Borel–Cantelli lemmas, but I don't know how to estimate $$P(|Z_1 + \cdots + Z_n| \ge \epsilon)$$ for $\epsilon > 0$.
I also tried Chebyshev's inequality, but it didn't work.
Hint:
Using BCL1: since $\sum_n P(Z_n = n) = \sum_n \frac{1}{2n^2} < \infty$, and likewise for the events $\{Z_n = -n\}$, we have
$$P(\liminf(Z_n \ne n)) = 1$$
$$P(\liminf(Z_n \ne -n)) = 1$$
I think we can conclude that
$$P(\liminf(Z_n = 0)) = 1 \tag{*}$$
because:
$\exists m_1 \ge 1$ s.t. $Z_{m_1} \ne m_1$, $Z_{m_1+1} \ne m_1+1$, ...
$\exists m_2 \ge 1$ s.t. $Z_{m_2} \ne -m_2$, $Z_{m_2+1} \ne -(m_2+1)$, ...
So for $m := \max\{m_1, m_2\}$, we have
$$Z_m \ne m, -m$$
$$Z_{m+1} \ne m+1, -(m+1)$$
$$\vdots$$
and since $Z_n$ only takes the values $-n, 0, n$, this forces $Z_n = 0$ for all $n \ge m$, which is exactly $(*)$.
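If $(*)$ holds, it already settles the original question: almost surely there is a (random) index $N$ with $Z_n = 0$ for all $n > N$, so the partial sums are eventually constant, and for $n \ge N$
$$\frac{1}{\sqrt{n}}(Z_1+\cdots+Z_n) = \frac{Z_1+\cdots+Z_N}{\sqrt{n}} \longrightarrow 0 \quad \text{as } n \to \infty,$$
since the numerator no longer depends on $n$.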
Also not sure if relevant, but by BCL2 (the $Z_n$ are independent and $\sum_n P(Z_n = 0) = \sum_n \left(1-\frac{1}{n^2}\right) = \infty$), it looks like
$$P(\limsup(Z_n = 0)) = 1$$
though this is weaker than, and already implied by, $(*)$.
As for computing
$$p_n := P\left(\left|\sum_{i=1}^{n} Z_i\right| > 0\right)$$
If $\sum p_n$ converges, then BCL1 (which, unlike BCL2, does not require independence, so it applies to the dependent events $\{|\sum_{i=1}^n Z_i| > 0\}$) gives
$$P(\limsup(|\sum_{i=1}^{n} Z_i| > 0)) = 0$$
$$\Rightarrow P(\liminf(|\sum_{i=1}^{n} Z_i| = 0)) = 1$$
$$\Rightarrow P(\liminf(\sum_{i=1}^{n} Z_i = 0)) = 1$$
This looks pretty strong to me, but it only works if you can actually show $\sum p_n$ converges.
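Unfortunately I don't think $\sum p_n$ can converge: on the event $\{Z_k = 0 \text{ for all } 2 \le k \le n\}$, which has probability $\prod_{k=2}^n \left(1-\frac{1}{k^2}\right) = \frac{n+1}{2n} > \frac12$, we have $\sum_{i=1}^n Z_i = Z_1 = \pm 1 \ne 0$. So $p_n > \frac12$ for every $n$, the series diverges, and this route fails; the eventual-constancy argument from $(*)$ seems to be the way to go. A crude Monte Carlo check (my own sketch, sampling from the given distribution):

```python
import random

def sample_Z(n, rng):
    # Z_n = +n or -n, each with probability 1/(2n^2); otherwise 0.
    u = rng.random()
    if u < 1.0 / (2 * n * n):
        return n
    if u < 1.0 / (n * n):
        return -n
    return 0

def estimate_p(n, trials=20_000, seed=1):
    # Monte Carlo estimate of p_n = P(|Z_1 + ... + Z_n| > 0).
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(sample_Z(k, rng) for k in range(1, n + 1))
        if s != 0:
            hits += 1
    return hits / trials

est = estimate_p(50)
print(est)  # consistent with p_n > 1/2 for all n
```

In fact the estimate comes out close to $1$: exact cancellation back to $0$ requires several nonzero $Z_k$ of matching sizes and signs, which is rare.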