Are random variables of distributions always independent?


$\newcommand{\P}{\mathbb{P}}$ Let $(\Omega_1,F_1,Q_1)$, $(\Omega_2,F_2,Q_2)$ be two probability spaces, and let $X\sim Q_1$, $Y\sim Q_2$ be random variables on some probability space $(\Omega,F,\mathbb{P})$.

If we now define $\Omega:= \Omega_1\times \Omega_2$, $F:=F_1\otimes F_2$ (the product $\sigma$-algebra) and $\P:= Q_1\otimes Q_2$, then we can take $X,Y$ to be the projections onto the first and second coordinates.

Then we have: $$ \begin{align} &\P(X\in A , Y\in B ) \\ = {} & \P(\{X\in A\}\cap \{Y\in B\}) \\ = {} & \P((A\times \Omega_2)\cap(\Omega_1\times B ))\\ = {} & \P(A\times B) \\ = {} & Q_1(A)\cdot Q_2(B) \\ = {} & (Q_1(A)\cdot Q_2(\Omega_2)) \cdot (Q_1(\Omega_1)\cdot Q_2(B)) \\ = {} & \P(X\in A)\cdot \P(Y\in B) \end{align} $$
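This chain of equalities can be sanity-checked numerically. A minimal sketch on a made-up finite example (the mass functions and events below are invented for illustration):

```python
# Sketch: under the product measure P = Q1 (x) Q2, the joint probability
# P(X in A, Y in B) factors as Q1(A) * Q2(B). All values here are made up.
from itertools import product

q1 = {1: 0.2, 2: 0.5, 3: 0.3}   # mass function of Q1 on Omega_1
q2 = {"a": 0.6, "b": 0.4}       # mass function of Q2 on Omega_2

# Product measure on Omega_1 x Omega_2
p = {(w1, w2): q1[w1] * q2[w2] for w1, w2 in product(q1, q2)}

A = {1, 3}   # event for X (projection onto the first coordinate)
B = {"b"}    # event for Y (projection onto the second coordinate)

lhs = sum(p[(w1, w2)] for (w1, w2) in p if w1 in A and w2 in B)
rhs = sum(q1[w] for w in A) * sum(q2[w] for w in B)
print(abs(lhs - rhs) < 1e-12)  # True: independence holds under the product measure
```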

However, this only shows that there's a definition of $\P$ so that $X,Y$ are independent.

How do I show that for all definitions of $\P$ the random variables $X,Y$ are independent?


Definition of stochastic independence of random variables as given by Georgii (the referenced image is unavailable; it is the standard definition): $X$ and $Y$ are independent if $\mathbb{P}(X\in A, Y\in B)=\mathbb{P}(X\in A)\cdot \mathbb{P}(Y\in B)$ for all $A\in F_1$, $B\in F_2$.


2 Answers


Counterexample:

$\newcommand{\P}{\mathbb P}$ Let $(\Omega_1,F_1,Q_1) = (\Omega_2,F_2,Q_2)=(\Omega,F,\mathbb{P})$ and define $X:=Y:=\mathrm{id}_\Omega$, so that $X$ and $Y$ have the same distribution and $Y=X$.

Then, for events $A,B$ with $A\cap B =\emptyset$ and $\P(A) \neq 0 \neq \P(B)$, we have: $$ \P(X\in A , Y\in B ) = \P(\{X\in A\}\cap \{Y\in B\}) = \P(\emptyset) = 0 \neq \P(X\in A)\cdot \P(Y\in B) $$
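The same failure can be checked numerically; a short sketch with a made-up three-point space, taking $Y = X$:

```python
# Sketch: Y = X on a made-up three-point space. Disjoint events A, B with
# positive probability then violate the product formula.
q = {0: 0.5, 1: 0.3, 2: 0.2}   # common distribution of X and Y (since Y = X)

A, B = {0}, {1}                # disjoint, both with positive probability

# Since Y = X, the joint event {X in A} ∩ {Y in B} equals {X in A ∩ B} = ∅
lhs = sum(q[w] for w in A & B)                     # P(X in A, Y in B) = 0
rhs = sum(q[w] for w in A) * sum(q[w] for w in B)  # P(X in A) * P(Y in B)
print(lhs, rhs)  # 0 vs 0.15: not independent
```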

Therefore, fixing the distributions of $X$ and $Y$ does not determine their joint law: we can construct two random variables with the prescribed distributions that are completely dependent.


Indeed, your answer is a counterexample. Here is a way to generate a large family of counterexamples.

Let $\Omega_1$ and $\Omega_2$ be finite. Then $Q_1$ is defined by a certain probability mass function $q_1$, so that $Q_1(E)=\sum_{\omega\in E}q_1(\omega)$, and similarly for $q_2$. Furthermore, letting $\def\P{\mathbb P}\P=Q_1\otimes Q_2$, then $\P$ has the mass function $p$, where $p(\omega_1,\omega_2)=q_1(\omega_1)\cdot q_2(\omega_2)$.

Now, choose two distinct outcomes $x_1\neq x_2$ in $\Omega_1$ and $y_1\neq y_2$ in $\Omega_2$ for which $q_1(x_i)>0$ and $q_2(y_i)>0$ for $i\in \{1,2\}$, and choose $\epsilon>0$ small enough that all the modified masses below remain nonnegative. Then, define a modified probability measure on $\Omega_1\times \Omega_2$ by the following probability mass function, a slight modification of $p$: $$ \tilde p(\omega_1,\omega_2)= \begin{cases} p(\omega_1,\omega_2)+\epsilon & \omega_1=x_1,\omega_2=y_1\\ p(\omega_1,\omega_2)-\epsilon & \omega_1=x_2,\omega_2=y_1\\ p(\omega_1,\omega_2)-\epsilon & \omega_1=x_1,\omega_2=y_2\\ p(\omega_1,\omega_2)+\epsilon & \omega_1=x_2,\omega_2=y_2\\ p(\omega_1,\omega_2) & \text{otherwise}\\ \end{cases} $$ You can verify that $\tilde p$ defines a probability measure on $\Omega_1\times \Omega_2$ whose marginal distributions on $\Omega_1$ and $\Omega_2$ are equal to $Q_1$ and $Q_2$, respectively: in each row and column of the perturbation, the $+\epsilon$ and $-\epsilon$ cancel. However, $\tilde p$ is no longer the product measure, so the random variables $X$ and $Y$ are no longer independent.
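The construction above can be sketched concretely. All the masses and the value of $\epsilon$ below are made up for illustration:

```python
# Sketch: the epsilon-perturbation of a product measure on a made-up
# two-by-two example. The marginals stay Q1 and Q2, but independence breaks.
from itertools import product

q1 = {"x1": 0.4, "x2": 0.6}   # made-up mass function of Q1
q2 = {"y1": 0.5, "y2": 0.5}   # made-up mass function of Q2
eps = 0.05                    # small enough that all perturbed masses stay >= 0

p = {(a, b): q1[a] * q2[b] for a, b in product(q1, q2)}
p_tilde = dict(p)
p_tilde[("x1", "y1")] += eps
p_tilde[("x2", "y1")] -= eps
p_tilde[("x1", "y2")] -= eps
p_tilde[("x2", "y2")] += eps

# Marginals of p_tilde: the +eps and -eps cancel along each row and column
marg1 = {a: sum(p_tilde[(a, b)] for b in q2) for a in q1}
marg2 = {b: sum(p_tilde[(a, b)] for a in q1) for b in q2}
marginals_preserved = (
    all(abs(marg1[a] - q1[a]) < 1e-12 for a in q1)
    and all(abs(marg2[b] - q2[b]) < 1e-12 for b in q2)
)

# But p_tilde is no longer the product of its marginals
not_product = abs(p_tilde[("x1", "y1")] - q1["x1"] * q2["y1"]) > 1e-9
print(marginals_preserved, not_product)  # True True
```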