I'm trying to understand in a practical way the meaning of zero probability events.
Let's say that for a certain probability space $(\Omega, \mathcal{F}, P)$ we have an event $E \in \mathcal{F}: P(E)=0$
Now, let $X(\omega) = 1_E(\omega)$ be a binary variable defined over that space. Let $(X_i)_1^{\infty}$ be an infinite sequence of iid realizations of $X$ and $S = \sum_1^{\infty}X_i$ be an infinite sum of these iid variables.
What I want to understand is what can be said about $S$ for probability-zero events like $E$.
Since $E[X] = P(E) = 0$, the SLLN implies that the fraction of positive $X_i$ converges to $0$ almost surely, and Borel–Cantelli gives the stronger statement that only a finite number of the $X_i$ can be positive. I.e., the tail event $T = $ "$X_i = 1$ occurs only finitely often" has probability 1 (more details below on my thinking here).
Details on finiteness of iid sums of zero-probability events
One way to think about a probability-zero event $A$ is through the expected value of the indicator random variable $1_{A}$; for concreteness, take the underlying space to be the unit square.
Let $X$ be a uniform random variable on the unit square. Then we have
$$E[1_A] = \lim_{n \to \infty} \frac1n \sum_{i=1}^n 1_A(X_i) = 0 \quad \text{(almost surely, by the SLLN)}$$
So the fraction of times $A$ happens in an iid sequence converges almost surely to $0$. So if we kept taking larger and larger iid samples of $X$ we'd see $X_i \in A$ becomes a vanishingly small proportion of our sample.
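To make this concrete numerically, here is a minimal Monte Carlo sketch. The choices here are my own illustrative assumptions: $A = \{0.5\}$ (a single point, hence probability zero under the uniform law), draws from the unit interval rather than the unit square for simplicity, and the sample size.

```python
import random

# Illustrative choice: A = {0.5}, a single point, so P(A) = 0 under the
# uniform distribution (floating point makes an exact hit possible in
# principle, but astronomically unlikely).
def indicator_A(x):
    return 1 if x == 0.5 else 0

random.seed(0)
n = 10**6
hits = sum(indicator_A(random.random()) for _ in range(n))

# The empirical mean (1/n) * sum of 1_A(X_i) estimates E[1_A] = 0.
print(hits / n)
```

In any realistic run the empirical fraction comes out as $0$, matching the almost-sure limit.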
Taking this one step further, consider the sum of the indicator functions:
$$S_n := \sum_{i=1}^n 1_A(X_i)$$
Since the $1_A(X_i)$ are identically distributed, we can invoke the Borel–Cantelli lemma (this direction of the lemma does not even require independence): $$\text{Let $(E_n)$ be a sequence of events. Then } \sum_{n=1}^{\infty} P(E_n) < \infty \implies P\left(\limsup_{n \to \infty} E_n\right) = 0 \;\text{(i.e., only finitely many of the $E_n$ occur, almost surely)}$$
to show that $$P\left( \lim_{n \to \infty} S_n <\infty\right) = 1$$
Proof
$$P(1_{A}(X_i)=1)=0 \implies \sum_{i=1}^{\infty} P(1_{A}(X_i)=1) = 0 < \infty \implies P\left(\limsup_{i \to \infty} \{1_{A}(X_i)=1\}\right) = 0$$
This means that even in a countably infinite sample you'd see $A$ occur only a finite number of times, almost surely (the event of hitting it infinitely often is not empty; it simply has probability zero).
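To see the Borel–Cantelli mechanism at work with events of positive but summable probability, here is a small sketch. The events $E_n = \{X_n < 1/n^2\}$ and the cutoff $N$ are my own illustrative choices; note $\sum_n P(E_n) = \pi^2/6 < \infty$, so the lemma applies.

```python
import random

# Illustrative events E_n = {X_n < 1/n^2}: P(E_n) = 1/n^2, so the
# probabilities sum to pi^2/6 < infinity and Borel-Cantelli applies.
random.seed(0)

N = 10**6
occurrences = [n for n in range(1, N + 1) if random.random() < 1 / n**2]

# Almost surely only finitely many E_n occur; the expected total number
# of occurrences is pi^2/6, roughly 1.64, and hits cluster at small n.
print(len(occurrences), occurrences)
```

In any run the list of occurrences is short and concentrated near the start of the sequence, exactly the "finitely often" behavior the lemma predicts.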
A similar conclusion is demonstrated in these two sets of notes:
Lemma 3 (Page 4, item G) and the section right at the top of pdf page 5 which relates this to infinite sums of indicator variables: https://viterbi-web.usc.edu/~mjneely/Borel-Cantelli-LLN.pdf
And corollary 2.3 here: https://ocw.mit.edu/courses/18-304-undergraduate-seminar-in-discrete-mathematics-spring-2015/2dc1c9e37d402c000b628ee85e2228d1_MIT18_304S15_project2.pdf
In fact, we can say more: $S = 0$ almost surely. Since $P(E)=0$, we have $P(X_i = 0) = 1$ for all $i$. Thus \begin{align*} P(S = 0) &= P\left(\bigcap_{i=1}^\infty \{X_i = 0\}\right) \\ &= \prod_{i=1}^\infty P(X_i = 0) \tag{by independence} \\ &= 1 \end{align*} (Independence is not actually needed here: a countable intersection of probability-one events always has probability one, by countable subadditivity applied to the complements.)
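The conclusion $P(S = 0) = 1$ can also be checked empirically, with the caveat that a simulation can only evaluate partial sums $S_m$, never $S$ itself. In this sketch the set $A$, the truncation length, and the number of runs are all illustrative assumptions of mine.

```python
import random

# Illustrative probability-zero set: a finite collection of points.
A = {0.1, 0.5, 0.9}

random.seed(2)

def partial_sum(m):
    # S_m = sum_{i=1}^m 1_A(X_i) over m uniform draws.
    return sum(1 for _ in range(m) if random.random() in A)

# P(S = 0) = 1 suggests every truncated run should come out 0.
runs = [partial_sum(10**4) for _ in range(100)]
print(runs.count(0))
```

Every run returns $0$, consistent with $S = 0$ almost surely, though of course a finite simulation cannot distinguish "probability zero" from "merely very rare."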