Does the intersection for independent events mean set-theoretic intersection?


It is evident that when two events $A$ and $B$ are independent, $P(A\cap B)=P(A)P(B)$. A good example is tossing a coin and rolling a die: what is the probability that heads occurs on the coin and an even number occurs on the die?

We have event $A:$ heads occurs on the coin,

and event $B:$ $2$, $4$, or $6$ occurs on the die.

We write the probability as $P(A\cap B)=\frac{1}{2}\times \frac{1}{2}=\frac{1}{4}$.

But when we write down the sample spaces we get $A=\left\{H\right\}$ and $B=\left\{2,4,6\right\}$, so $A\cap B=\varnothing$. Why is there a contradiction?


There are 2 best solutions below

BEST ANSWER

You haven't defined a common probability space for your coin and your die. You are basically talking about two outcomes that occur in separate universes, and so talking about their intersection is meaningless.

To fix that, let's build a common space for these two experiments. Define $$ \Omega:=\{H,T\}\times\{1,2,3,4,5,6\}. $$ Then the outcome of your coin is just the first coordinate projection, and the outcome of your die is just the second. And in this case, the formal translations of your two events would be $$ A:=\{(H,1),(H,2),(H,3),(H,4),(H,5),(H,6)\} $$ and $$ B:=\{(H,2),(H,4),(H,6),(T,2),(T,4),(T,6)\}. $$

Then you have $$ A\cap B=\{(H,2),(H,4),(H,6)\}. $$ Now, assuming you define your probability measure $P$ on $\Omega$ to be the uniform distribution, you can verify that:

  1. The coin and the die each have the correct (marginal) distribution;
  2. the events $A$ and $B$ each have probability $\frac{1}{2}$, exactly as you would expect;
  3. the intersection has probability $\frac{1}{4}$, just as it should for independent events $A$ and $B$.
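These three checks can be verified mechanically. Here is a minimal Python sketch (the names `omega`, `A`, `B`, and the helper `P` are my own, chosen to mirror the notation above, not code from the answer):

```python
from fractions import Fraction
from itertools import product

# Common sample space for one coin toss and one die roll
omega = set(product("HT", range(1, 7)))        # 12 equally likely outcomes

A = {(c, d) for c, d in omega if c == "H"}     # heads on the coin
B = {(c, d) for c, d in omega if d % 2 == 0}   # even number on the die

def P(event):
    # Uniform probability measure on omega
    return Fraction(len(event), len(omega))

print(P(A))        # 1/2  (marginal probability for the coin)
print(P(B))        # 1/2  (marginal probability for the die)
print(P(A & B))    # 1/4  == P(A) * P(B), so A and B are independent
```

Note that the set intersection `A & B` is taken inside the common space $\Omega$, which is exactly the point of the answer.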
ANSWER

The actual space of outcomes for these two experiments is not what you suggest, but is instead the set $\Omega = \{H, T\} \times \{1, \cdots, 6\}$. Now we have two events. Getting heads means getting an outcome in the set $$\mathcal{H} = \{(H, 1), (H, 2), \cdots, (H, 6)\},$$ while getting an even number on the die means getting an outcome in the set $$\mathcal{E} = \{(H, 2), (T, 2), \cdots, (H, 6), (T, 6)\}.$$ So when we ask for the probability that we get both, we are asking for the probability of getting an outcome in $\mathcal{H} \cap \mathcal{E}$.

We say that the two events are independent if and only if $$P(\mathcal{H}) \cdot P(\mathcal{E}) = P(\mathcal{H} \cap \mathcal{E}),$$ where $P(S) = \sum_{s \in S} P(s)$. Of course, for any outcome $s \in \Omega$, $P(s)$ depends on your modelling assumptions, so in some models rolling a die and flipping a coin might not be independent. Examples where this happens in the real world will mostly be contrived or unrealistic, but it is absolutely allowed in theory! (Math doesn't distinguish between a coin, a die, etc.; what matters is the space of outcomes and the probability of each outcome.) The only way to know for sure whether two events are independent is to apply the definition above.
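To illustrate that independence depends on the measure and not on the physical objects, here is a sketch with a hypothetical biased joint distribution (my own construction, not from the answer) in which heads forces the die to land even. The coin is still marginally fair, yet the two events fail the product test:

```python
from fractions import Fraction
from itertools import product

omega = list(product("HT", range(1, 7)))

# Hypothetical biased joint distribution: whenever the coin shows heads,
# the die is forced to be even.  The twelve weights sum to 1.
p = {}
for c, d in omega:
    if c == "H":
        p[(c, d)] = Fraction(1, 6) if d % 2 == 0 else Fraction(0)
    else:
        p[(c, d)] = Fraction(1, 12)

assert sum(p.values()) == 1

def P(event):
    # P(S) = sum of P(s) over outcomes s in S, as in the definition above
    return sum(p[s] for s in event)

H = {(c, d) for c, d in omega if c == "H"}      # heads
E = {(c, d) for c, d in omega if d % 2 == 0}    # even die

print(P(H))          # 1/2
print(P(E))          # 3/4
print(P(H & E))      # 1/2
print(P(H) * P(E))   # 3/8  != 1/2, so H and E are dependent under this measure
```

With the uniform measure on the same $\Omega$ the product test would pass, which is exactly the answer's point: independence is a property of the probability measure, checked by the definition.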