Independence of two die-rolling events: does it really mean anything?


Consider throwing a fair die with sample space $\Omega=\{ 1, 2, 3, 4, 5, 6 \}$. Consider $A= \{ 1, 3, 5 \}$, $B= \{ 1, 2 \}$ and $C= \{ 1, 2, 3 \}$.

We see that $A$ and $B$ are independent since

$$P(A \cap B)=\frac16 \text{ and } P(A)\cdot P(B)=\frac16$$

and $A$ and $C$ are not independent since

$$P(A \cap C)=\frac13 \text{ and } P(A)\cdot P(C)=\frac14.$$
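These checks are easy to verify by direct enumeration. Here is a minimal sketch in Python using `fractions.Fraction` for exact arithmetic; the names `P`, `A`, `B`, `C` are my own choices for illustration:

```python
from fractions import Fraction

# Fair die: every outcome in Omega is equally likely.
omega = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event & omega), len(omega))

A = {1, 3, 5}   # odd number
B = {1, 2}
C = {1, 2, 3}

print(P(A & B), P(A) * P(B))  # 1/6 and 1/6 -> A, B independent
print(P(A & C), P(A) * P(C))  # 1/3 and 1/4 -> A, C not independent
```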

What does this mean? How can I make sense of something like

"The event of getting an odd number and the event of getting $1$ or $2$ are independent, but the event of getting an odd number and the event of getting $1$, $2$, or $3$ are not"?

Is there a physical/intuition element to it?

On BEST ANSWER

For me, independence makes the most sense when you think about its definition using conditional probabilities:

$A$ and $B$ are independent iff $P(A|B)=P(A)$, where $P(A|B):= \frac{P(A\cap B)}{P(B)}$ (assuming $P(B)>0$).

You read $P(A|B)$ as "the probability that event $A$ will happen, given that event $B$ happens". What it's doing is restricting our sample space to only the outcomes in $B$: we look only at the part of $A$ that falls inside $B$ (the numerator) and renormalize by the probability that $B$ happened (the denominator).

Therefore, the statement "$A$ and $B$ are independent iff $P(A|B)=P(A)$" means that knowing $B$ will occur does not affect how likely $A$ is to occur.

For your examples:

$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{\frac16}{\frac13} = \frac12 = P(A)$$

So knowing that $B$ will happen doesn't change how likely it is that $A$ will happen.

Compare to:

$$P(A|C) = \frac{P(A \cap C)}{P(C)} = \frac{\frac13}{\frac12} = \frac23 \neq P(A)$$

Therefore, being told that $C$ will occur should increase the probability you assign to $A$ (e.g., if some tipster told you the die was rigged to only land on outcomes in $C$).
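Both conditional probabilities can be computed mechanically. A short sketch, again using `fractions.Fraction` for exact results (the helper name `P_given` is mine, not standard notation):

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability under the uniform measure on the fair die."""
    return Fraction(len(event & omega), len(omega))

def P_given(event, given):
    """Conditional probability P(event | given) = P(event & given) / P(given)."""
    return P(event & given) / P(given)

A, B, C = {1, 3, 5}, {1, 2}, {1, 2, 3}

print(P_given(A, B), P(A))  # 1/2 and 1/2: knowing B tells us nothing about A
print(P_given(A, C), P(A))  # 2/3 and 1/2: knowing C raises the odds of A
```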

UPDATE per OP's request to clarify why two events are "independent":

Warning -- this will get a little "philosophical" before getting into the specifics.

The concept of "independent events" is a unique concept in probability theory that distinguishes it from the more general measure theory that it is based on.

For two events to be independent, they need to have an extremely precise relationship between them (think: the overlap in their Venn diagrams). Too far apart and they become disjoint; too close together and they become identical. Neither extreme captures the idea of independence as "knowing $A$ happened tells you nothing about whether $B$ happened".

The conditional probability view of independence gives a mathematically precise definition of what this means, but what is it saying conceptually?

What it is saying is that $A$ "looks the same" when we restrict our world to $B$ as it does when we don't restrict it (i.e., unconditionally). That is what conditioning does: to "condition on $B$" means we throw out all parts of the sample space $\Omega$ that are not in $B$, defining a new sample space $\Omega_B$ and a new probability measure $P_B$ such that $P_B(\Omega_B)=1$.

Here "looks the same" means $P(A)=P_B(A)$ -- that is, A takes up the same fraction of "space" (probability) both unconditionally (i.e., considering full universe of events) and conditionally (i.e., re-defining $\Omega$ to be $B$ instead of all events). It turns out $P_B(A)=P(A|B) = \frac{P(A \cap B)}{P(B)}$ satisfies this on the redefined sample space $\Omega_B$.
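This restricted-measure view can also be made concrete. A minimal sketch, assuming the die example above; computing $P_B(A)$ directly on the restricted space $\Omega_B$ recovers the conditional formula:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
A = {1, 3, 5}
B = {1, 2}

def P(event, space=omega):
    """Uniform probability of `event` within the given sample space."""
    return Fraction(len(event & space), len(space))

# Restrict the world to B: the new sample space is Omega_B = B.
omega_B = omega & B
print(P(omega_B, omega_B))       # 1: the restricted measure P_B is normalized
print(P(A, omega_B))             # 1/2: P_B(A), measured inside Omega_B
print(P(A & B) / P(B))           # 1/2: the conditional formula P(A & B)/P(B)
```

The last two lines agreeing is exactly the claim $P_B(A) = P(A|B) = \frac{P(A \cap B)}{P(B)}$, and both equal $P(A) = \frac12$ here because $A$ and $B$ are independent.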

One level up conceptually, what we are seeing is an instance of self-similarity -- $A \cap B$ has the exact same relationship to $B$ as $A$ has to $\Omega$, just like the "golden spiral".

This is why assuming a set of events are independent is such a strong assumption -- it is imposing a very delicate/exact requirement on how much they each overlap each other.