My probability training is rusty now...
The first Borel–Cantelli lemma states: if $$ \sum_{n=1}^\infty P(E_n)<\infty,$$ then $$P(E_n\text{ occurs infinitely often}) = 0.$$
Is the converse true? That is, if $$P(E_n\text{ occurs infinitely often}) = 0,$$ can we conclude that $$ \sum_{n=1}^\infty P(E_n)<\infty?$$
There's an interesting meta-question that can be asked here. Suppose all that we know are the probabilities $P(A_i)$ for all $i$. We don't know anything about the relationships between the $A_i$ (in other words, we know the marginals of a family of Bernoulli random variables, but nothing about the joint distribution). When can we, despite having only this information, conclude that $P(A_i \text{ i.o.})=0$? Borel-Cantelli says that when $\sum P(A_i)<\infty$, we can conclude this. Could there be other sequences such that $\sum P(A_i)=\infty$, and yet it still must be the case that $P(A_i \text{ i.o.})=0$?
The answer is no. When $\sum P(A_i)=\infty$, it is always possible that the $A_i$ occur infinitely often with non-zero probability. We will construct our $(A_i)$ as intervals of the probability space $[0, 1]$ under the Lebesgue measure. Let $A_1$ be equal to $[0, P(A_1)]$. Now let $A_2$ be of width $P(A_2)$, and stick it immediately on the right hand side of $A_1$, in other words $A_2=[P(A_1), P(A_1) + P(A_2)]$. Continue in this way, concatenating the $A_i$ together from left to right, until we can't fit any more in the unit interval. At that point, start over with the next $A_i$ at $0$ and repeat the process. In this way we lay out the $A_i$ next to each other, performing "passes" across the unit interval. The divergence of $\sum P(A_i)$ ensures that we will perform infinitely many passes.
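The wrap-around construction can be sketched in a few lines of code. The function name `lay_out` and the widths $p_i = 1/(i+1)$ are my own choices for illustration; any sequence with divergent sum would do.

```python
# Sketch of the wrap-around construction (illustration only).
# The widths p_i = 1/(i+1) are a hypothetical choice with divergent sum.

def lay_out(widths):
    """Place intervals of the given widths left to right on [0, 1],
    starting a new pass at 0 whenever the next interval doesn't fit.
    Returns a list of (start, end, pass_index) triples."""
    intervals = []
    start, current_pass = 0.0, 0
    for w in widths:
        if start + w > 1.0:  # this interval doesn't fit: start the next pass
            start, current_pass = 0.0, current_pass + 1
        intervals.append((start, start + w, current_pass))
        start += w
    return intervals

widths = [1.0 / (i + 1) for i in range(1, 10000)]  # partial sums grow like log
placed = lay_out(widths)
print("passes completed:", placed[-1][2])
```

Because $\sum 1/(i+1)$ diverges, the pass counter grows without bound as more intervals are laid out, matching the claim that divergence forces infinitely many passes.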
The set $\{A_i \text{ i.o.}\}$ is precisely
$$\bigcap_{j}\bigcup_{i\geq j}A_i=\bigcap_{j}U_j,$$
where $U_j=\bigcup_{i\geq j}A_i$ is the $j$-th "tail-union".
With our construction, we have guaranteed that each tail-union $U_j$ covers essentially the entire unit interval, since each $U_j$ contains infinitely many of these "passes". Therefore the intersection must have non-zero measure.
There is one detail to attend to. As we stack up intervals in a single pass across the unit interval, at some point we reach an interval that is the "straw that breaks the camel's back": the interval that doesn't fit and forces us to start the next pass. Our argument rests on the intuition that each pass covers "most" of the unit interval, but if this excess interval is large, then the pass it ended may actually be quite small. This is not a problem: if the excess interval is large enough to make its pass small, then, since it is the first interval of the next pass, the next pass must be large, ensuring that there are still infinitely many "large" passes. To make this precise, let $B_1, B_2, \ldots, B_n$ be one pass, so that $B_{n+1}$ is the first interval of the next pass. We must have
$$P(B_1)+P(B_2)+\cdots+P(B_{n+1})>1,$$
and therefore, by a pigeonhole argument, either
$$P(B_1)+P(B_2)+\cdots+P(B_n)>0.5$$ or $$P(B_{n+1})>0.5.$$
In the former case the pass consisting of $B_1, \ldots, B_n$ covers $[0, 0.5]$; in the latter case the next pass does (it begins with $B_{n+1}$ placed at $0$). So out of any two consecutive passes, at least one covers $[0, 0.5]$, and therefore every tail-union $U_j$ also contains $[0, 0.5]$. In particular, $\bigcap_j U_j$ has measure at least $0.5$, which is non-zero.
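As a numerical sanity check (my own sketch, again using the hypothetical divergent widths $p_i = 1/(i+1)$ and the helper `hits`, neither of which is part of the argument), we can count how many of the laid-out intervals contain a fixed point of $[0, 0.5]$ and watch that count grow as more passes are completed:

```python
# Count how many intervals of the wrap-around construction contain x.
# The widths p_i = 1/(i+1) are a hypothetical divergent example.

def hits(x, widths):
    """Lay the intervals out pass by pass and count those containing x."""
    count, start = 0, 0.0
    for w in widths:
        if start + w > 1.0:  # doesn't fit: start the next pass at 0
            start = 0.0
        if start <= x <= start + w:
            count += 1
        start += w
    return count

widths = [1.0 / (i + 1) for i in range(1, 200000)]
print(hits(0.25, widths[:1000]), hits(0.25, widths))  # the count keeps growing
```

Since at least one of every two consecutive passes covers $[0, 0.5]$, the point $0.25$ keeps getting hit as more passes accumulate, which is exactly the "infinitely often" behaviour the argument predicts.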