I am trying to understand this statement:
A (possibly infinite) collection of events $(A_i)_{i \in I}$ is an independent collection if for every finite subset $J$ of $I$, one has $$P\Big(\bigcap_{i \in J} A_i\Big) = \prod_{i \in J} P(A_i).$$ The collection $(A_i)_{i \in I}$ is often said to be mutually independent.
Warning: If events $(A_i)_{i \in I}$ are independent, they are pairwise independent, but the converse is false. ($(A_i)_{i \in I}$ are pairwise independent if $A_i$ and $A_j$ are independent for all $i, j$ with $i \neq j$.)
I am confused about the contrast being drawn here. Could I have an example, or perhaps a different take on the explanation? Thank you!
Example
Toss two fair coins and, for $i=1,2$, let $H_i$ be the event that coin $i$ shows heads.
Further let $E$ denote the event that both coins show the same face.
Then $H_1,H_2,E$ are pairwise independent but not mutually independent.
Each event has probability $\frac12$, and each pairwise intersection ($H_1\cap H_2$, $H_1\cap E$, $H_2\cap E$) is the single outcome "both coins show heads", so $P(A\cap B)=\frac14=\frac12\cdot\frac12=P(A)P(B)$ for every pair: pairwise independence holds. Mutual independence fails because: $$P(H_1\cap H_2\cap E)=\frac14\neq\frac18=\frac12\cdot\frac12\cdot\frac12=P(H_1)P(H_2)P(E)$$
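If it helps to see the arithmetic checked mechanically, here is a small sketch (names like `prob` and the event sets are my own, not from any particular library) that enumerates the four equally likely outcomes of two fair coin tosses and tests both kinds of independence:

```python
from itertools import product
from fractions import Fraction

# Sample space: the four equally likely outcomes of two fair coin tosses.
outcomes = list(product("HT", repeat=2))

def prob(event):
    """Probability of an event (a set of outcomes) under the uniform measure."""
    return Fraction(len(event), len(outcomes))

H1 = {o for o in outcomes if o[0] == "H"}   # coin 1 shows heads
H2 = {o for o in outcomes if o[1] == "H"}   # coin 2 shows heads
E  = {o for o in outcomes if o[0] == o[1]}  # both coins show the same face

events = [H1, H2, E]

# Pairwise independence: P(A ∩ B) = P(A) P(B) for every pair.
pairwise = all(prob(A & B) == prob(A) * prob(B)
               for i, A in enumerate(events)
               for B in events[i + 1:])

# Mutual independence would also require
# P(H1 ∩ H2 ∩ E) = P(H1) P(H2) P(E), i.e. 1/4 = 1/8 -- which is false.
mutual = prob(H1 & H2 & E) == prob(H1) * prob(H2) * prob(E)

print(pairwise)  # True
print(mutual)    # False
```

Every pairwise check passes, but the triple-intersection check does not, which is exactly the gap between pairwise and mutual independence.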