I'm trying to learn probability on my own and have recently been studying random variables. The book I'm using explains why the criterion for independence of events is different from the criterion for independence of random variables, but I just can't get my head around it.
"Definition 3.8.2 (Independence of many r.v.s). Random variables $X_1 , \ldots , X_n$ are independent if \begin{align} & P (X_1 \leq x_1 , \ldots , X_n \leq x_n ) \\[6pt] = {} & P (X_1 \leq x_1 ) \cdots P (X_n \leq x_n ), \text{ for all } x_1 , \ldots , x_n \in\mathbb R.\end{align} For infinitely many r.v.s, we say that they are independent if every finite subset of the r.v.s is independent. Comparing this to the criteria for independence of $n$ events, it may seem strange that the independence of $X_1 , \ldots , X_n$ requires just one equality, whereas for events we needed to verify pairwise independence for all $\binom{n}{2}$ pairs, three-way independence for all $\binom{n}{3}$ triplets, and so on. However, upon closer examination of the definition, we see that independence of r.v.s requires the equality to hold for all possible $x_1 , \ldots , x_n$ -- infinitely many conditions!"
So somehow, requiring the equality to hold for every possible choice of values $x_1, \ldots, x_n$ lets us infer independence for every subcollection of the r.v.s as well, without having to check each subset separately the way we do for events. Can someone help illuminate this for me?
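One way to see why the subset conditions come for free: if the joint CDF factors at *every* point, then sending some of the $x_i$ to $+\infty$ (where $P(X_i \leq x_i) \to 1$) collapses the equality to the factorization for the remaining subset of variables. A minimal numerical sketch of this, using a toy example (three independent r.v.s, each uniform on $\{0,1,2\}$ — the setup is assumed, not from the book):

```python
import itertools

# Toy example: three independent r.v.s, each uniform on {0, 1, 2}.
support = [0, 1, 2]
triples = list(itertools.product(support, repeat=3))  # all equally likely

def cdf3(x1, x2, x3):
    """Joint CDF P(X1 <= x1, X2 <= x2, X3 <= x3)."""
    hits = sum(1 for t in triples if t[0] <= x1 and t[1] <= x2 and t[2] <= x3)
    return hits / len(triples)

def cdf1(x):
    """Marginal CDF P(X <= x), same for each coordinate."""
    return sum(1 for v in support if v <= x) / len(support)

INF = 10**9  # stands in for x -> +infinity, where P(X <= x) = 1

# Definition 3.8.2's single condition holds at every grid point...
for x1, x2, x3 in itertools.product(support, repeat=3):
    assert abs(cdf3(x1, x2, x3) - cdf1(x1) * cdf1(x2) * cdf1(x3)) < 1e-12

# ...and sending x3 -> infinity recovers the pairwise condition for (X1, X2),
# since P(X3 <= INF) = 1 drops out of both sides.
for x1, x2 in itertools.product(support, repeat=2):
    assert abs(cdf3(x1, x2, INF) - cdf1(x1) * cdf1(x2)) < 1e-12
```

The same trick with any subset of coordinates sent to infinity recovers each of the $\binom{n}{2}, \binom{n}{3}, \ldots$ event-style conditions, which is why one family of equalities suffices.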
For independent random variables I prefer the following definition: $$P(X \in A, Y \in B) = P(X \in A)P(Y \in B)$$ for all $A, B$ in the given $\sigma$-algebra. Imho, from here it is easier to see that pairwise independence does not imply joint independence.
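A standard counterexample (my choice of illustration, not from the post above): let $X, Y$ be independent fair coin flips and $Z = X \oplus Y$ (XOR). Every pair is independent, but the triple is not, since any two of them determine the third. A quick exhaustive check over the four equally likely outcomes:

```python
import itertools

# Sample space: the four equally likely outcomes (x, y); z = x XOR y.
outcomes = list(itertools.product([0, 1], repeat=2))

def prob(event):
    """P(event) under the uniform distribution, with event(x, y, z)."""
    hits = sum(1 for x, y in outcomes if event(x, y, x ^ y))
    return hits / len(outcomes)

# Pairwise independence: P(V_i = a, V_j = b) = P(V_i = a) P(V_j = b)
# for every pair of coordinates and every pair of values.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, b in itertools.product([0, 1], repeat=2):
        joint_pair = prob(lambda *v: v[i] == a and v[j] == b)
        product_pair = prob(lambda *v: v[i] == a) * prob(lambda *v: v[j] == b)
        assert abs(joint_pair - product_pair) < 1e-12

# Joint independence fails: knowing X and Y forces Z.
joint = prob(lambda x, y, z: x == 0 and y == 0 and z == 0)
product = (prob(lambda x, y, z: x == 0)
           * prob(lambda x, y, z: y == 0)
           * prob(lambda x, y, z: z == 0))
print(joint, product)  # 0.25 0.125 -- they disagree
```

So all three pairwise conditions hold, yet the three-way factorization fails, which is exactly why the event definition must check every subset while the CDF definition handles them all at once.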