Intuition behind and explanation of independence of random variables


In my course we define two real-valued random variables $X,Y$ to be independent if their $\sigma$-algebras are independent. If both $\sigma$-algebras are over the same set ($E$, say), we define $\sigma(\mathcal{A}_1),\sigma(\mathcal{A}_2) \subseteq \mathcal{P}(E)$ (the power set) to be independent if for all $A_1 \in \sigma(\mathcal{A}_1)$ and $A_2 \in \sigma(\mathcal{A}_2)$ we have $\mathbb{P}(A_1 \cap A_2) = \mathbb{P}(A_1)\mathbb{P}(A_2)$.

We then defined the $\sigma$-algebra generated by a random variable as $\sigma(X) = \{X^{-1}(B), B \in \mathcal{B}\}$ where $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}$. Now I can see how to apply the above to check for independence of two random variables that both take values in the same $E$.
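To see what $\sigma(X)$ looks like concretely, here is a small Python sketch (the four-point sample space and the map $X$ below are invented purely for illustration): when $X$ has a finite range, every preimage $X^{-1}(B)$ arises from letting $B$ run over the subsets of the range of $X$.

```python
from itertools import chain, combinations

# Hypothetical 4-point sample space and a random variable X on it.
omega = ["a", "b", "c", "d"]
X = {"a": 1, "b": 1, "c": 2, "d": 3}

# For a finite range, sigma(X) = { X^{-1}(B) : B a subset of range(X) }.
values = sorted(set(X.values()))
subsets = chain.from_iterable(combinations(values, k)
                              for k in range(len(values) + 1))
sigma_X = {frozenset(w for w in omega if X[w] in B) for B in subsets}

# X has 3 distinct values, so sigma(X) has 2^3 = 8 events; note that
# "a" and "b" always appear together, since X cannot separate them.
assert len(sigma_X) == 8
assert frozenset({"a", "b"}) in sigma_X
```

The point of the sketch: $\sigma(X)$ is exactly the collection of events the variable $X$ can distinguish.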

However, suppose that $X$ is the outcome of a dice roll, so $X$ takes values in $\{1,2,3,4,5,6\}$, while $Y$ is $0$ if it is not raining outside and $1$ if it is raining. As these are unrelated, we want $X$ and $Y$ to be independent.

We have that $\sigma(X) = \mathcal{P}(\{1,2,3,4,5,6\})$ while $\sigma(Y) = \mathcal{P}(\{\text{raining, not raining}\})$. To prove these $\sigma$-algebras are independent I need to show that for all $A_1 \in \sigma(X)$ and $A_2 \in \sigma(Y)$ we have $\mathbb{P}(A_1 \cap A_2) = \mathbb{P}(A_1)\mathbb{P}(A_2)$. However, I don't understand how to even interpret $A_1 \cap A_2$, since $A_1$ and $A_2$ are subsets of different underlying sets.

Can someone please tell me what is the best way to think about this? Thank you in advance!



Best Answer

We need two random variables to be defined on the same probability space before we can talk about their independence or dependence; otherwise the concept is meaningless (at least within the framework of modeling random outcomes on a probability space $(\Omega,\Sigma, P)$).

You say it's obvious that the dice roll is independent of the rain, but it's at least a logical possibility that it isn't, no? Well, let's make the reasonable assumption that it is. So, we should model the dice roll and the rain as independent random variables on some probability space. Let $\Omega = \{1,2,3,4,5,6\}\times \{r,n\}$ and let $\Sigma = 2^\Omega.$ The outcome $(2,r)$ means we roll a two and it rains, whereas $(4,n)$ means we roll a four and it doesn't.

Then we let $P(\{(i,r)\}) = \frac{1}{6}p_r$ for every $i$, where $p_r$ is whatever the probability of rain is, and we let $P(\{(i,n)\})= \frac{1}{6}(1-p_r)$; we then extend $P$ to the rest of the events by additivity.
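As a sketch of this construction (with an arbitrary assumed choice $p_r = 1/4$; exact rational arithmetic via `fractions` avoids floating-point noise):

```python
from fractions import Fraction

p_r = Fraction(1, 4)  # assumed probability of rain, chosen arbitrarily

# Sample space: (dice outcome, weather), with r = rain, n = no rain.
omega = [(i, w) for i in range(1, 7) for w in ("r", "n")]

def P(event):
    """Probability of an event (any collection of outcomes), by additivity."""
    return sum(Fraction(1, 6) * (p_r if w == "r" else 1 - p_r)
               for (_, w) in event)

# Sanity checks: each singleton gets the stated mass, and P(Omega) = 1.
assert P([(2, "r")]) == Fraction(1, 6) * p_r
assert P(omega) == 1
```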

Then the random variable for the dice roll just projects $\omega$ onto its first coordinate, and the random variable for the rain projects onto the second coordinate (mapping $r$ to $1$ and $n$ to $0$, if we want to make it real-valued).
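Continuing the sketch, the two projections can be checked for independence directly. Since $\sigma(X)$ and $\sigma(Y)$ are generated by the events $\{X = i\}$ and $\{Y = y\}$, and every event in them is a disjoint union of such atoms, by additivity it suffices to verify the product rule on these atoms (again with the assumed $p_r = 1/4$):

```python
from fractions import Fraction

p_r = Fraction(1, 4)  # same assumed probability of rain as before

omega = [(i, w) for i in range(1, 7) for w in ("r", "n")]

def P(event):
    return sum(Fraction(1, 6) * (p_r if w == "r" else 1 - p_r)
               for (_, w) in event)

def X(w):
    return w[0]                      # dice roll: first coordinate

def Y(w):
    return 1 if w[1] == "r" else 0   # rain indicator: second coordinate

# Verify P(A1 ∩ A2) = P(A1) P(A2) on the generating events.
for i in range(1, 7):
    for y in (0, 1):
        A1 = {w for w in omega if X(w) == i}
        A2 = {w for w in omega if Y(w) == y}
        assert P(A1 & A2) == P(A1) * P(A2)
```

Note that here $A_1 \cap A_2$ makes sense because both $A_1$ and $A_2$ are subsets of the same $\Omega$, which was exactly the point of building the product space.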