If we consider $(X_n)_{n \in \mathbb{N}}$, a sequence of independent random variables satisfying $$P(X_n = 1) = p_n, \qquad P(X_n = 0) = 1 - p_n,$$ then:
$$X_n \overset{a.s.}{\longrightarrow} 0 \iff \sum_{n \in \mathbb{N}} p_n < \infty.$$
I couldn't show the forward implication: that almost sure convergence of $X_n$ to $0$ implies $\sum_{n \in \mathbb{N}} p_n < \infty$.
In this context, with the $X_n$ independent, the partial converse to Borel–Cantelli (the second Borel–Cantelli lemma) reads: $$\sum p_n=\infty \implies P(X_n=1\text{ i.o.})=1.$$ The contrapositive of that statement is $$P(X_n=1\text{ i.o.})\neq 1 \implies \sum p_n <\infty.$$ Now $X_n\to0$ a.s. means exactly that $P(X_n=1\text{ i.o.})=0\neq 1$, since the $X_n$ take only the values $0$ and $1$; the contrapositive then gives $\sum p_n<\infty$. (Note that by the Kolmogorov zero–one law, $P(X_n=1\text{ i.o.})\in\{0,1\}$.) The key assumption here is independence: we didn't need it to show $\sum p_n<\infty \implies X_n\to0$ a.s., but it is necessary for the partial converse to Borel–Cantelli.
Another way to see this whole thing is to let $X_n$ be the indicator random variables for the events $E_n$ in the usual phrasing of the Borel–Cantelli lemma and its partial converse. Then of course $p_n=P(E_n)$, and $P(E_n\text{ i.o.})=0$ is the same statement as $X_n\to0$ a.s. So this is really just the lemma rephrased in terms of Bernoulli random variables instead of events; the two formulations are equivalent.
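Not part of the proof, but the dichotomy is easy to see numerically. Here is a minimal Monte Carlo sketch in Python; the helper `tail_one_prob` and the particular choices $p_n = 1/n^2$ (summable) versus $p_n = 1/n$ (divergent) are my own illustration, not from the question:

```python
import random

def tail_one_prob(p, start, stop, trials=1000, seed=0):
    """Monte Carlo estimate of P(X_n = 1 for some start <= n < stop),
    where the X_n are independent Bernoulli(p(n))."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if any(rng.random() < p(n) for n in range(start, stop))
    )
    return hits / trials

# Summable case: p_n = 1/n^2. The tail sum beyond n = 100 is about 0.01,
# so seeing a '1' after n = 100 is rare -- consistent with X_n -> 0 a.s.
p_summable = tail_one_prob(lambda n: 1 / n**2, start=100, stop=5000)

# Divergent case: p_n = 1/n. By the second Borel-Cantelli lemma the events
# {X_n = 1} occur infinitely often a.s., so a '1' somewhere in a long tail
# window is nearly certain.
p_divergent = tail_one_prob(lambda n: 1 / n, start=100, stop=5000)

print(p_summable, p_divergent)
```

With these parameters the first estimate comes out small (on the order of the tail sum, about $0.01$) while the second is close to $1$, mirroring the two directions of the equivalence.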