I'm studying probability theory through these notes and I can't quite understand what's being said on page 10 (in relation to the Weak Law of Large Numbers):
Remark 4.6: Strictly speaking, we bent our rules here. An infinite sequence of non-constant, pairwise independent variables requires an infinite sample space.
Just so I understand what is being said: Suppose $(\Omega, \mathcal{F}, \mathbb{P})$ is our probability space and $\Omega = \{\omega_1,\ldots,\omega_n\}$ is finite. Is the above statement saying that if $\{\xi_i\}_{i\in\mathbb{N}}$ is an infinite sequence of independent RV's on $(\Omega, \mathcal{F}, \mathbb{P})$, then they must all be constants almost surely? I can't quite see why that is the case in general.
Without loss of generality, we may remove the elements of $\Omega$ which have zero probability.
Note that, as $\Omega$ is finite, the probability measure $P$ can take only finitely many values: there are at most $2^{|\Omega|}$ events, hence at most $2^{|\Omega|}$ possible values of $P$.
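To make the counting concrete, here is a minimal Python sketch (the three-point space and its probabilities are illustrative assumptions, not part of the argument): it enumerates every event of a small finite $\Omega$ and collects the distinct probability values the measure can attain.

```python
from itertools import chain, combinations

# Hypothetical finite sample space with three outcomes (illustrative values).
omega = ["w1", "w2", "w3"]
prob = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

def powerset(s):
    """All subsets of s, i.e. all events of the finite space."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# P(A) for every event A: there are only 2**|Omega| events, so the
# measure attains at most 2**|Omega| distinct values.
values = {round(sum(prob[w] for w in event), 10) for event in powerset(omega)}
print(len(values) <= 2 ** len(omega))  # True
```

The exact count of distinct values depends on coincidences among the subset sums, but it can never exceed $2^{|\Omega|}$.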
Suppose, toward a contradiction, that infinitely many of the $\xi_i$'s are nonconstant. For each nonconstant $\xi_i$, pick $a_i\in\Bbb R$ in the range of $\xi_i$ with $P(\xi_i=a_i)\in(0,1)$ (possible since $\xi_i$ takes at least two values with positive probability); for a constant $\xi_i$, let $a_i$ be its value, so $P(\xi_i=a_i)=1$. Then by independence (of every finite subfamily) we have $$P\left(\bigcap_{i\le N}\{\xi_i=a_i\}\right)\ =\ \prod_{i\le N}P(\xi_i=a_i).$$ As $N$ increases past each nonconstant index, a new factor in $(0,1)$ enters the product, so along those indices the products form a strictly decreasing sequence of values of $P$. A strictly decreasing sequence has pairwise distinct terms, so $P$ would attain infinitely many values, contradicting the above observation.
Thus, only finitely many of the $\xi_i$'s can be nonconstant.
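The contradiction can be sketched numerically (a hedged illustration; the fair-coin distribution and the space size $n=4$ are assumptions chosen for concreteness): if each $\xi_i$ were a fair coin, the joint probabilities $(1/2)^N$ are strictly decreasing, hence pairwise distinct, and they quickly outnumber the $2^n$ events a finite space with $n$ points can have.

```python
from fractions import Fraction

# Assume each xi_i is a fair coin, so P(xi_i = a_i) = 1/2 and
# P(xi_1 = a_1, ..., xi_N = a_N) = (1/2)**N by independence.
products = [Fraction(1, 2) ** N for N in range(1, 25)]

# Strictly decreasing, hence pairwise distinct probability values.
assert all(p > q for p, q in zip(products, products[1:]))
assert len(set(products)) == len(products)

# A finite Omega with n outcomes has at most 2**n events, hence at most
# 2**n distinct probability values; the 24 distinct products above
# already exceed 2**4 = 16.
n = 4
print(len(set(products)) > 2 ** n)  # True
```

Exact rational arithmetic via `Fraction` avoids floating-point collisions that could mask the distinctness of the products.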