A question about independence of random variables


Let $C \sim \hbox{Bernoulli}(p)$, and let $X$ and $Y$ be two independent random variables. Consider the following experiment to define a new random variable $Z$: $$ Z=\begin{cases} X & \text{if } C=0 \\ Y & \text{if } C=1 \end{cases} $$

It is intuitive to me that $Z$ depends on $C$. But my intuition also says that $C$ is independent of $Z$. So my question is:

How can I show the following? $$E[C\mid Z]=E[C]$$


2 Answers

BEST ANSWER

If $X$ and $Y$ have the same distribution and both are independent of $C$, then $Z$ would also be independent of $C$ because knowing $C$ tells you nothing about the values of $Z$.
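This first case is easy to check by simulation. The sketch below (my own illustration, not from the answer) takes $X$ and $Y$ to be i.i.d. standard normals, both independent of $C$; conditioning on an event defined by $Z$ should then leave $P(C=1)$ unchanged:

```python
import numpy as np

# Monte Carlo sketch (assumed setup): X, Y i.i.d. standard normal,
# both independent of C ~ Bernoulli(p).  If X and Y share a
# distribution, Z should carry no information about C.
rng = np.random.default_rng(0)
n, p = 1_000_000, 0.3

C = rng.random(n) < p          # C = 1 with probability p
X = rng.standard_normal(n)
Y = rng.standard_normal(n)
Z = np.where(C, Y, X)          # Z = X if C = 0, else Y

# Conditioning on an event defined by Z should not move P(C = 1)
p_given_pos = C[Z > 0].mean()
print(p, p_given_pos)          # both close to 0.3 up to Monte Carlo error
```

The particular event $\{Z>0\}$ is arbitrary; any other event defined through $Z$ would give the same conclusion.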

If $X$ and $Y$ don't have the same distribution, then, as pointed out in the comments, $Z$ and $C$ would not be independent. As an extreme example, suppose that $X$ is always an even number and $Y$ is always an odd number. Then $Z$ is not independent of $C$ because if you knew that $C=1$, you would also know that $Z$ is odd. Similarly, $C$ is not independent of $Z$ because if you knew $Z$ is even, then you would also know that $C=0$.
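The even/odd example above can also be sketched numerically (the specific value ranges here are my own illustrative choice):

```python
import numpy as np

# Sketch of the even/odd example: X is always even, Y is always odd,
# so observing the parity of Z reveals C exactly.
rng = np.random.default_rng(1)
n, p = 100_000, 0.5

C = rng.random(n) < p
X = 2 * rng.integers(0, 10, n)      # even values only
Y = 2 * rng.integers(0, 10, n) + 1  # odd values only
Z = np.where(C, Y, X)

# Among even Z, C is always 0; among odd Z, C is always 1
print(C[Z % 2 == 0].mean(), C[Z % 2 == 1].mean())
```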

I think the confusion comes from the mathematical meaning of "independence" vs a more colloquial meaning. In math, "independence" of two random variables means that knowing information about one tells you nothing about the other. It's a symmetric relationship: If $X$ is independent of $Y$, then $Y$ is independent of $X$. It says nothing about causation. Colloquially, though, if one says that an event $A$ is independent of another event $B$, they might mean that $B$ doesn't cause $A$, or that $A$ doesn't cause $B$, or something else along these lines. This ambiguity is why mathematical definitions are formalized to precise statements.

ANSWER

$$Z=(1-C)X+CY \;\Rightarrow\; E\big(E(C\mid Z)e^{itZ}\big)=E\big(Ce^{itZ}\big)=E\big(Ce^{itY}\big)=p\,E\big(e^{itY}\big),$$ using that $Ce^{itZ}=Ce^{itY}$ (the term vanishes unless $C=1$, in which case $Z=Y$) and that $C$ is independent of $Y$. If $X$ and $Y$ have the same distribution, then $E(e^{itZ})=E(e^{itY})$, so $E\big((E(C\mid Z)-p)\,e^{itZ}\big)=0$ for every $t$; since the Fourier transform of the signed measure $\big(E(C\mid Z)-p\big)\,dP_Z$ vanishes identically, $E(C\mid Z)=p=E(C)$ almost surely. (Note that mean-independence $E(C\mid Z)=E(C)$ does not by itself imply that $C$ and $Z$ are independent.)
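The intermediate identity $E\big(Ce^{itZ}\big)=p\,E\big(e^{itY}\big)$ holds whenever $C$ is independent of the pair $(X,Y)$, even if $X$ and $Y$ have different distributions. A quick numeric sanity check (my own choice of distributions, not from the answer):

```python
import numpy as np

# Sanity-check sketch of E(C e^{itZ}) = p E(e^{itY}) for a fixed t,
# with C independent of (X, Y).  X exponential and Y normal are
# deliberately different distributions; the identity still holds.
rng = np.random.default_rng(2)
n, p, t = 1_000_000, 0.3, 0.7

C = rng.random(n) < p
X = rng.exponential(1.0, n)
Y = rng.standard_normal(n)
Z = np.where(C, Y, X)

lhs = np.mean(C * np.exp(1j * t * Z))   # E(C e^{itZ})
rhs = p * np.mean(np.exp(1j * t * Y))   # p E(e^{itY})
print(abs(lhs - rhs))                   # small: agreement up to Monte Carlo error
```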