I have a question about Independence on a collection of events from the statement in Bertsekas and Tsitsiklis' "Introduction to Probability" (page 40):
Independence means that the occurrence or non-occurrence of any number of the events from that collection carries no information on the remaining events or their complements. For example, if the events $A_1$, $A_2$, $A_3$, $A_4$ are independent, one obtains relations such as $$P(A_1 \cup A_2 | A_3 \cap A_4) = P(A_1 \cup A_2)$$ or $$P(A_1 \cup A_2^c | A_3^c \cap A_4) = P(A_1 \cup A_2^c)$$
I don't really understand this statement and can't generalize the statement from the examples given. Can someone please elaborate on what they mean? Does "any number" mean either a union or an intersection of events? I would really appreciate it if you can give more examples as well. Thank you!
Pairwise independence differs from mutual independence: the latter additionally requires that every event in the collection be independent of any intersection of the remaining ones. In your example, if we say that $A_1, A_2, A_3$ are mutually independent, this means that the following pairs are independent: $A_1$ and $A_2$, $A_1$ and $A_3$, $A_2$ and $A_3$, $A_1$ and $A_2 \cap A_3$, $A_2$ and $A_1 \cap A_3$, $A_3$ and $A_1 \cap A_2$.
There is a nice counterexample showing that pairwise independence alone does not give mutual independence — one of the most beautiful moments in probability theory. A standard one: toss two fair coins and let $A_1 = \{\text{first coin is heads}\}$, $A_2 = \{\text{second coin is heads}\}$, $A_3 = \{\text{both coins show the same face}\}$. Any two of these events are independent, but $P(A_1 \cap A_2 \cap A_3) = \tfrac14 \neq \tfrac18 = P(A_1)P(A_2)P(A_3)$.
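The classic two-coin counterexample (pairwise but not mutually independent events) can be checked by brute force over the sample space; this is a small sketch of my own, not something from the book:

```python
from itertools import product
from fractions import Fraction

# Sample space: two fair coin tosses; each of the 4 outcomes has probability 1/4.
omega = list(product("HT", repeat=2))
p = Fraction(1, len(omega))

def prob(event):
    """Probability of an event given as a predicate on outcomes."""
    return p * sum(1 for w in omega if event(w))

A1 = lambda w: w[0] == "H"   # first coin is heads
A2 = lambda w: w[1] == "H"   # second coin is heads
A3 = lambda w: w[0] == w[1]  # both coins show the same face

# Pairwise independence holds: P(E ∩ F) = P(E) P(F) for every pair.
for E, F in [(A1, A2), (A1, A3), (A2, A3)]:
    assert prob(lambda w: E(w) and F(w)) == prob(E) * prob(F)

# ...but mutual independence fails:
triple = lambda w: A1(w) and A2(w) and A3(w)
print(prob(triple))                     # 1/4
print(prob(A1) * prob(A2) * prob(A3))   # 1/8
```

Knowing that $A_1$ and $A_2$ both occurred forces $A_3$ to occur, so the triple carries information even though every pair does not.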
Note: it can be shown that, in the case of mutual independence, $A_1$ is, for example, also independent of $A_3 \cup A_2$: $$P\left( A_1 \cap (A_3 \cup A_2) \right)=P\left( (A_1 \cap A_3) \cup (A_1 \cap A_2) \right) = \\ = P\left( A_1 \cap A_3 \right)+P\left( A_1 \cap A_2 \right) - P\left( A_1 \cap A_2 \cap A_3 \right)= \\ =P\left( A_1 \right)P\left( A_3 \right)+P\left( A_1 \right)P\left( A_2 \right)-P\left( A_1 \right)P\left( A_2 \right)P\left( A_3 \right)= \\=P\left( A_1 \right) P\left( A_3 \cup A_2 \right)$$
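The identity $P(A_1 \cap (A_3 \cup A_2)) = P(A_1)\,P(A_3 \cup A_2)$ can also be verified numerically on a toy model of three mutually independent events — three fair coin flips with $A_i = \{\text{coin } i \text{ is heads}\}$ (my own choice of model, not from the book):

```python
from itertools import product
from fractions import Fraction

# Sample space: three fair coin flips; each of the 8 outcomes has probability 1/8.
omega = list(product("HT", repeat=3))
p = Fraction(1, len(omega))

def prob(event):
    """Probability of an event given as a predicate on outcomes."""
    return p * sum(1 for w in omega if event(w))

A1 = lambda w: w[0] == "H"
A2 = lambda w: w[1] == "H"
A3 = lambda w: w[2] == "H"

union32 = lambda w: A3(w) or A2(w)

lhs = prob(lambda w: A1(w) and union32(w))  # P(A1 ∩ (A3 ∪ A2))
rhs = prob(A1) * prob(union32)              # P(A1) P(A3 ∪ A2)
print(lhs, rhs)  # both 3/8
```

Here $P(A_3 \cup A_2) = 3/4$ and $P(A_1) = 1/2$, so both sides come out to $3/8$, matching the derivation above.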