This can be found in David Williams's "Probability with Martingales", pp. 113–114.
Suppose that $(a_n)$ is a sequence of real numbers and that $(\epsilon_n)$ is a sequence of IID random variables with $P(\epsilon_n=\pm1)=\frac{1}{2}$. The result of 12.2 shows that
$$(1)\quad \sum \epsilon_na_n \text{ converges a.s. if and only if } \sum a_n^2<\infty$$
and
$$(2)\quad \sum\epsilon_na_n \text{ oscillates infinitely if and only if } \sum a_n^2=\infty$$
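As a quick sanity check (an illustration, not part of any proof), here is a small Python simulation of this dichotomy. The choices $a_n = 1/n$ (square-summable) and $a_n = 1/\sqrt{n}$ (not square-summable) are mine, picked just to show the contrast:

```python
import math
import random

# Simulate partial sums S_n = sum_{k<=n} eps_k * a_k for two choices of (a_n):
# a_n = 1/n        (sum a_n^2 < infinity  -> partial sums should settle down)
# a_n = 1/sqrt(n)  (sum a_n^2 = infinity  -> partial sums keep fluctuating)
random.seed(0)
N = 200_000
S_conv, S_osc = 0.0, 0.0
path_conv, path_osc = [], []
for n in range(1, N + 1):
    eps = random.choice((-1.0, 1.0))
    S_conv += eps / n
    S_osc += eps / math.sqrt(n)
    path_conv.append(S_conv)
    path_osc.append(S_osc)

# Compare how much each path still moves over its second half.
tail_conv = path_conv[N // 2:]
tail_osc = path_osc[N // 2:]
spread_conv = max(tail_conv) - min(tail_conv)
spread_osc = max(tail_osc) - min(tail_osc)
print(spread_conv, spread_osc)
```

On a typical run the square-summable path barely moves in its second half, while the other path keeps wandering over a range that is orders of magnitude larger.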
The first equivalence is clear to me: it follows directly from 12.2 in David Williams, which states:
(12.2) For a sequence $(X_k)$ of independent, zero-mean, uniformly bounded random variables, $\sum X_k \text{ converges a.s. if and only if } \sum \operatorname{Var}(X_k)<\infty$.
I want to understand now why the second equivalence is true.
For $"\Rightarrow"$:
If $\liminf_{n\rightarrow \infty}\sum_{k\le n} \epsilon_ka_k\neq \limsup_{n\rightarrow \infty}\sum_{k\le n} \epsilon_ka_k$, then $\sum \epsilon_na_n$ does not converge, and Theorem 12.2 yields $\sum \operatorname{Var}(\epsilon_na_n)=\sum a_n^2=\infty$, as desired.
But I don't know how to show the other direction. Can someone help?
Old question, but I have been working through Williams's book recently and was also puzzled by his remark here.
By 12.2(a), if $\sum_n a_n^2 < \infty$ then $\sum_n a_n\epsilon_n$ converges a.s., so in particular we do not have almost sure infinite oscillation. Conversely, suppose the series does not almost surely oscillate infinitely; we will show that the series converges a.s. and that $\sum_n a_n^2<\infty$.
Let $S_n = \sum_{k=1}^n a_k\epsilon_k$ and note that \begin{align*} \mathbb{P}(\{\limsup S_n = \infty\}) &= \mathbb{P}\Big(\bigcap_{M=1}^\infty \bigcup_{n=1}^\infty \{S_n> M\}\Big)\\ &= \lim_{M\to\infty}\mathbb{P}\Big(\bigcup_{n=1}^\infty \{S_n> M\}\Big)\\ &=\lim_{M\to\infty}\lim_{N\to\infty}\mathbb{P}(\{\exists n\in\{1,\dots,N\}:S_n> M\})\\ &=\lim_{M\to\infty}\lim_{N\to\infty}\mathbb{P}(\{\exists n\in\{1,\dots,N\}:S_n < -M\})\\ &\vdots\\ &=\mathbb{P}(\{\liminf S_n = -\infty\}) \end{align*} where the equality $$\lim_{M\to\infty}\lim_{N\to\infty}\mathbb{P}(\{\exists n\in\{1,\dots,N\}:S_n > M\})=\lim_{M\to\infty}\lim_{N\to\infty}\mathbb{P}(\{\exists n\in\{1,\dots,N\}:S_n < -M\})$$ follows since the random variables $\epsilon_n$ are symmetric, so $(S_1,\dots,S_N)$ and $(-S_1,\dots,-S_N)$ have the same distribution.
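The symmetry step can be verified exactly for a short horizon by brute-force enumeration of all sign patterns. The choices $a_k = 1/k$, $N = 10$, $M = 1.5$ below are illustrative only:

```python
from itertools import product

# Exact check of the symmetry step: with a_k = 1/k and horizon N,
# count sign patterns for which some partial sum exceeds M versus
# some partial sum drops below -M. The map (signs) -> (-signs) is a
# bijection swapping the two events, so the counts must agree.
N, M = 10, 1.5
a = [1.0 / k for k in range(1, N + 1)]

above = below = 0
for signs in product((-1, 1), repeat=N):
    partial, hit_above, hit_below = 0.0, False, False
    for s, ak in zip(signs, a):
        partial += s * ak
        hit_above |= partial > M
        hit_below |= partial < -M
    above += hit_above
    below += hit_below

print(above, below)  # equal counts: the two events have the same probability
```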
The events $\{\limsup S_n = \infty\}$ and $\{\liminf S_n = -\infty\}$ belong to the tail $\sigma$-algebra of $(\epsilon_n)_{n \in\mathbb{N}}$ (changing finitely many of the $\epsilon_n$ shifts the partial sums by a bounded amount, which affects neither event), so Kolmogorov's zero-one law gives $$ \mathbb{P}(\{\limsup S_n = \infty\})=\mathbb{P}(\{\liminf S_n = -\infty\})\in\{0,1\}. $$ Hence either $\sum_n a_n\epsilon_n$ almost surely oscillates infinitely, or both of the above probabilities are zero.
Since $\sum_n \epsilon_n a_n$ does not almost surely oscillate infinitely, we get $$ \mathbb{P}(\{\limsup S_n = \infty\}\cup\{\liminf S_n = -\infty\}) = 0. $$ Taking the complement of this event, we get $$ \mathbb{P}\Big(\bigcup_{M\in\mathbb{N}}\bigcap_{n\in\mathbb{N}} \{S_n \in [-M,M]\}\Big) = \lim_{M\to\infty}\mathbb{P}\Big(\bigcap_{n\in\mathbb{N}}\{S_n \in [-M,M]\}\Big)= 1, $$ the first equality holding by continuity from below, since the events increase with $M$.
Hence the sequence $(a_n)$ must be bounded; otherwise, for any $M>0$, the inequality $|a_n| \geq 3M$ would hold for infinitely many $n$, and then, whenever $S_n \in [-M,M]$ and $|a_{n+1}|\geq 3M$, we would have $|S_{n+1}| \geq |a_{n+1}| - |S_n| \geq 2M$. This would imply that $ \mathbb{P}(\bigcap_{n\in \mathbb{N}} \{S_n \in [-M,M]\}) = 0 $ for each $M$, which is impossible in light of the last paragraph.
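The deterministic inequality used here, $|S_{n+1}| \ge |a_{n+1}| - |S_n|$, can be spot-checked numerically; the sampled values below are arbitrary and serve only as an illustration:

```python
import random

# Spot check (illustration only): if |S_n| <= M and |a_{n+1}| >= 3M, then
# |S_{n+1}| = |S_n + eps*a_{n+1}| >= |a_{n+1}| - |S_n| >= 3M - M = 2M,
# by the reverse triangle inequality.
random.seed(1)
M = 1.0
violations = 0
for _ in range(10_000):
    S_n = random.uniform(-M, M)                                       # |S_n| <= M
    a_next = random.choice((-1, 1)) * random.uniform(3 * M, 10 * M)   # |a_{n+1}| >= 3M
    eps = random.choice((-1, 1))
    if abs(S_n + eps * a_next) < 2 * M:
        violations += 1
print(violations)  # 0: the bound always holds
```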
So we get that $$ \{a_n\}_{n\in\mathbb{N}}\text{ is bounded}\qquad\text{and}\qquad\mathbb{P}(\{(S_n)\text{ is bounded}\}) = 1. $$ Then $(\epsilon_n a_n)$ is a sequence of zero-mean, uniformly bounded random variables whose partial sums are bounded with probability one (in particular, with nonzero probability). By the remark on page 113 of Williams's book (essentially the proof of 12.2(b)), the almost sure boundedness of the partial sums $S_n$ implies that $S_n$ converges a.s. and that $\sum_n a_n^2<\infty$.
To summarize, the above argument proves the implications $$ \sum_n a_n^2 < \infty \Longrightarrow \sum_n a_n\epsilon_n\text{ converges a.s.}\Longrightarrow \mathbb{P}(\{\limsup S_n = \infty\}\cap \{\liminf S_n = -\infty\}) \neq 1 $$ and $$ \mathbb{P}(\{\limsup S_n = \infty\}\cap \{\liminf S_n = -\infty\}) \neq 1\Longrightarrow \sum_n a_n^2 < \infty \text{ and }\sum_n a_n\epsilon_n\text{ converges a.s.} $$ that is, $$ \mathbb{P}(\{\limsup S_n = \infty\}\cap \{\liminf S_n = -\infty\}) \neq 1\Longleftrightarrow \sum_n a_n^2 < \infty \Longleftrightarrow\sum_n a_n\epsilon_n\text{ converges a.s.} $$