Implications of conditional independence


Suppose two $\sigma$-algebras $\mathcal{F}_1,\mathcal{F}_2$ are conditionally independent given some $\sigma$-algebra $\mathcal{G}$, i.e., for any $A\in \mathcal{F}_1$ and $B\in\mathcal{F}_2$ we have

$P(A\cap B|\mathcal{G}) = P(A|\mathcal{G})P(B|\mathcal{G})$

Is it true that for any $C\in \mathcal{G}$ with $P(C)>0$, and any $A\in \mathcal{F}_1$ and $B\in\mathcal{F}_2$,

$P(A\cap B|C) = P(A|C)P(B|C)$?

I have been trying to show this, and here is what I have:

$P(A\cap B\mid C) = \frac{1}{P(C)} P(A\cap B\cap C) = \frac{1}{P(C)}\,\mathbb{E}\bigl[\mathbb{E}(1_A 1_B 1_C\mid\mathcal{G})\bigr]$

$= \frac{1}{P(C)}\,\mathbb{E}\bigl[\mathbb{E}(1_A 1_C\mid\mathcal{G})\,\mathbb{E}(1_B 1_C\mid\mathcal{G})\bigr]$

At this point I feel like I should be able to use the definition of conditional expectation, but I am not sure what to do with the product of conditional expectations.
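For what it's worth, the computation can be pushed one step further using only facts already in play: since $C\in\mathcal{G}$, the indicator $1_C$ is $\mathcal{G}$-measurable and can be pulled out of each conditional expectation ("taking out what is known"), and $1_C^2 = 1_C$:

```latex
\mathbb{E}(1_A 1_C \mid \mathcal{G})\,\mathbb{E}(1_B 1_C \mid \mathcal{G})
  = 1_C\,\mathbb{E}(1_A \mid \mathcal{G})\,\mathbb{E}(1_B \mid \mathcal{G}),
\qquad\text{hence}\qquad
P(A\cap B \mid C)
  = \frac{1}{P(C)}\,
    \mathbb{E}\bigl[\,1_C\,\mathbb{E}(1_A \mid \mathcal{G})\,
    \mathbb{E}(1_B \mid \mathcal{G})\,\bigr].
```

The sticking point is now visible: the outer expectation of a product of $\mathcal{G}$-measurable random variables does not in general factor into a product of expectations, so there is no way to split the right-hand side into $P(A\mid C)\,P(B\mid C)$.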

Best answer:

The reason why you are having difficulty showing this is that the assertion is not true! If it were, then setting $C=\Omega$ we would conclude $$P(A\cap B) = P(A\cap B\mid \Omega) = P(A\mid \Omega) P(B\mid \Omega) = P(A)P(B)$$ for every $A\in\cal F_1$ and every $B\in\cal F_2$, i.e., we would deduce that the sigma-algebras $\cal F_1$ and $\cal F_2$ are unconditionally independent.
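To make the failure concrete, here is a minimal sketch (a hypothetical example, not from the original post): $Z$ is a fair coin, and given $Z$ the indicators of $A$ and $B$ are conditionally i.i.d. Bernoulli with a success probability that depends on $Z$. Exact arithmetic with `Fraction` verifies conditional independence given each value of $Z$, while unconditional independence fails.

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)
# Success probability of the coin selected by Z (values chosen for illustration)
p = {0: Fraction(1, 10), 1: Fraction(9, 10)}

def bern(prob, x):
    return prob if x == 1 else 1 - prob

# Joint pmf of (Z, X, Y): X and Y are conditionally i.i.d. given Z
joint = {(z, x, y): half * bern(p[z], x) * bern(p[z], y)
         for z, x, y in product((0, 1), repeat=3)}

def P(event):
    return sum(pr for w, pr in joint.items() if event(w))

A = lambda w: w[1] == 1          # A = {X = 1}
B = lambda w: w[2] == 1          # B = {Y = 1}

# Conditional independence given Z holds atom by atom:
for z in (0, 1):
    Cz = lambda w, z=z: w[0] == z
    assert P(lambda w: A(w) and B(w) and Cz(w)) / P(Cz) == \
           (P(lambda w: A(w) and Cz(w)) / P(Cz)) * \
           (P(lambda w: B(w) and Cz(w)) / P(Cz))

# But taking C = Omega breaks the product formula:
print(P(lambda w: A(w) and B(w)))   # 41/100
print(P(A) * P(B))                  # 1/4
```

Since $41/100 \neq 1/4$, the sigma-algebras $\sigma(X)$ and $\sigma(Y)$ are not unconditionally independent, exactly as the $C=\Omega$ argument predicts.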

What's the intuition behind this negative result? Suppose ${\cal G}=\sigma(Z)$, where $Z$ is a discrete random variable. If $\cal F_1$ and $\cal F_2$ are conditionally independent given $\cal G$, it is straightforward to prove that $$P(A\cap B\mid Z=z)=P(A\mid Z=z)P(B\mid Z=z)$$ for any $z$ with $P(Z=z)>0$ and any $A\in\cal F_1$ and $B\in\cal F_2$. However, $\cal G$ contains more events than those of the form $\{Z=z\}$ (for instance, unions of several such atoms), and on these larger events the product formula can fail. The problem is that $\cal G$ is too big.
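The point about unions of atoms can also be checked numerically. In this sketch (again a hypothetical construction), $Z$ is uniform on $\{0,1,2\}$ and $C=\{Z\in\{0,1\}\}$ is a proper event of $\sigma(Z)$ built from two atoms; the product formula holds on each atom but fails on their union.

```python
from fractions import Fraction
from itertools import product

third = Fraction(1, 3)
# Given Z = z, X and Y are i.i.d. Bernoulli(p[z]) (values chosen for illustration)
p = {0: Fraction(1, 10), 1: Fraction(9, 10), 2: Fraction(1, 2)}

def bern(prob, x):
    return prob if x == 1 else 1 - prob

# Joint pmf of (Z, X, Y) with Z uniform on {0, 1, 2}
joint = {(z, x, y): third * bern(p[z], x) * bern(p[z], y)
         for z, x, y in product((0, 1, 2), (0, 1), (0, 1))}

def P(event):
    return sum(pr for w, pr in joint.items() if event(w))

def P_given(event, cond):
    return P(lambda w: event(w) and cond(w)) / P(cond)

A = lambda w: w[1] == 1            # A = {X = 1}
B = lambda w: w[2] == 1            # B = {Y = 1}
C = lambda w: w[0] in (0, 1)       # C = {Z in {0,1}}, a union of atoms, so C is in sigma(Z)

# On each atom {Z = z} the product formula holds ...
for z in (0, 1, 2):
    Cz = lambda w, z=z: w[0] == z
    assert P_given(lambda w: A(w) and B(w), Cz) == P_given(A, Cz) * P_given(B, Cz)

# ... but on the union C it fails:
print(P_given(lambda w: A(w) and B(w), C))   # 41/100
print(P_given(A, C) * P_given(B, C))         # 1/4
```

This is precisely the sense in which $\sigma(Z)$ is "too big": conditioning on a union of atoms mixes the two regimes of $Z$, and the mixture destroys the factorization.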