Reading the text below, I have two questions:
1) I can't manage to do the exercise: prove that $\bigcap_{\alpha \in A}F_{\alpha} = \emptyset$. Any help with the proof?
2) If I understand his point correctly, I find it striking that $P(F_{\alpha})=1$ for all $\alpha$ and $\bigcap_{\alpha \in A}F_{\alpha} = \emptyset$ hold at the same time.
Is he trying to show how non-intuitive probability is, and that we shouldn't rely on intuition when studying it?
At the same time, I ask myself: how can both of these statements be true? I would appreciate your comments on that.

(1) Given $\omega$ with infinitely many occurrences of $H$, define $\alpha$ by letting $\alpha(n)$ be the index $j$ such that $\omega_j$ is the $n$th occurrence of $H$. Then $\# \{k \leq n: \omega_{\alpha(k)}=H\}=n$ by construction, so $$\frac{\# \{k \leq n: \omega_{\alpha(k)}=H\}}{n} \to 1 \neq \frac{1}{2}.$$ Thus, $\omega \notin F_\alpha$.
(If $\omega$ has only finitely many occurrences of $H$, then along any subsequence the frequency of $H$ tends to $0 \neq \frac{1}{2}$, so $\omega \notin F_\alpha$ for any $\alpha$.)
We have shown that for all $\omega$, there exists some $\alpha$ such that $\omega \notin F_\alpha$. It follows that there is no $\omega$ that is a member of every $F_\alpha$. In other words, $\bigcap_\alpha F_\alpha = \emptyset$.
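The diagonal construction above can be illustrated numerically (a hedged sketch: the names `omega` and `alpha` are mine, and a finite prefix of tosses stands in for the infinite sequence):

```python
import random

random.seed(0)

# A finite prefix of a coin-toss sequence omega (H = heads, T = tails).
omega = [random.choice("HT") for _ in range(10_000)]

# As in the proof: alpha(n) = index of the n-th occurrence of H in omega.
alpha = [j for j, toss in enumerate(omega) if toss == "H"]

# Along the subsequence alpha, every toss is H by construction, so the
# frequency of heads along alpha is exactly 1, not 1/2 -- hence omega
# fails the defining condition of F_alpha for this particular alpha.
freq = sum(omega[j] == "H" for j in alpha) / len(alpha)
print(freq)  # 1.0
```

The point is that $\alpha$ is tailored to $\omega$: every $\omega$ is excluded by *some* $F_\alpha$, even though each fixed $F_\alpha$ has full measure.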
(2) The reason is that the intersection ranges over uncountably many sets. A countable intersection of probability $1$ sets has probability $1$ (you can work this out directly from the definition of a probability measure), but the same needn't hold for uncountable intersections, as this and other examples show. I suppose Williams's point is that there's nothing contradictory about this in a measure-theoretic setting, though I don't really understand the invective about philosophers.
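For completeness, here is my sketch of the standard argument for the countable case, using only countable subadditivity: if $P(F_n) = 1$ for $n = 1, 2, \ldots$, then
$$P\Big(\bigcap_{n=1}^{\infty} F_n\Big) = 1 - P\Big(\bigcup_{n=1}^{\infty} F_n^{c}\Big) \geq 1 - \sum_{n=1}^{\infty} P(F_n^{c}) = 1 - \sum_{n=1}^{\infty} 0 = 1.$$
This is exactly the step that fails for an uncountable index set $A$: there is no uncountable analogue of the subadditivity bound, and indeed $\bigcup_{\alpha \in A} F_\alpha^{c}$ above is all of $\Omega$.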