why do independent variables have zero covariance?


I can see how it follows from the expression using $E[XY]=E[X]E[Y]$, but I can't understand it intuitively. $$\operatorname{cov}(X,Y)=E[(X-E[X])(Y-E[Y])]$$ for independent random variables $X$ and $Y$.

How do you understand the logic behind it being $0$? A figure explaining it would be awesome.



Because $X$ and $Y$ are independent, the value of one does not affect the value of the other. So consider the expectation over each of the two variables separately, i.e. first consider:

$E_X[(X - E(X))(Y - E(Y))]$

Because they're independent, nothing that happens inside the expectation over $X$ affects $Y$, so $Y$ behaves as a constant (just as, when you take the partial derivative of something with respect to $x$, you treat $y$ as a constant). So we pull the $Y$ factor out, giving $E_X[X - E(X)] \times (Y - E(Y))$; the factor on the left is "the expected difference between $X$ and its expected value", which is of course $0$. You can do the same thing with the expectation over $Y$, and you can then apply the law of iterated expectations to show that the whole thing must be zero.
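As a numerical sketch of this intuition (the distributions here are arbitrary choices, not from the question), we can draw two independent samples and check that the empirical version of $E[(X-E[X])(Y-E[Y])]$ comes out near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two independent random variables: the draws never influence each other.
x = rng.normal(loc=2.0, scale=1.0, size=n)
y = rng.exponential(scale=3.0, size=n)

# Sample covariance: the average of (X - E[X]) * (Y - E[Y]).
cov = np.mean((x - x.mean()) * (y - y.mean()))
print(cov)  # close to 0 (up to Monte Carlo noise, roughly 1e-3)
```

The estimate is not exactly zero because of sampling noise, but it shrinks like $1/\sqrt{n}$ as the sample grows.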


By the definition of covariance, in general:

$$\mathsf{Cov}(X,Y) ~=~ \mathsf E\big((X-\mathsf E(X))(Y-\mathsf E(Y))\big) \\ = \mathsf E\big(XY-X\,\mathsf E(Y)-\mathsf E(X)\,Y+\mathsf E(X)\,\mathsf E(Y)\big) \\ = \mathsf E(XY)-\mathsf E(X)\,\mathsf E(Y)$$

In the case of independence we have

$$\begin{align}\mathsf E(XY)~&=~\mathsf E(~\mathsf E(XY\mid X)~) &\text{Law of Total Expectation}\\[1ex]&=~\mathsf E(~X~\mathsf E(Y\mid X)~) \\[1ex] &=~ \mathsf E(X~\mathsf E(Y)) & \text{independence}\\[1ex] &=~ \mathsf E(X)~\mathsf E(Y) \end{align}$$

Hence, when $X$ and $Y$ are independent, the covariance is zero.
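The factorisation $\mathsf E(XY)=\mathsf E(X)\,\mathsf E(Y)$ can also be checked exactly on a small discrete example (two independent fair dice, my choice of illustration): because the joint pmf factorises as $p(x)\,p(y)$, the sum over all pairs splits into a product of two sums.

```python
import itertools

# Two independent fair dice: the joint pmf factorises as p(x) * p(y).
vals = range(1, 7)
p = 1 / 6

E_X = sum(x * p for x in vals)                                    # 3.5
E_Y = sum(y * p for y in vals)                                    # 3.5
E_XY = sum(x * y * p * p for x, y in itertools.product(vals, vals))

print(E_XY, E_X * E_Y)  # both equal 12.25 (up to floating-point rounding)
```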


Alternatively we could say

$$\begin{align}\mathsf E\Big(\big(X-\mathsf E(X)\big)\big(Y-\mathsf E(Y)\big)\Big) ~&=~\mathsf E\Big(\mathsf E\Big(\big(X-\mathsf E(X)\big)\big(Y-\mathsf E(Y)\big)~\Big\vert~ X\Big)\Big) & \text{Law of Total Expectation} \\[1ex] &=~\mathsf E\Big(\big(X-\mathsf E(X)\big)~\mathsf E\big(Y-\mathsf E(Y)\mid X\big)\Big) \\[1ex] &=~\mathsf E\Big(\big(X-\mathsf E(X)\big)~\mathsf E\big(Y-\mathsf E(Y)\big)\Big) &\text{by independence} \\[1ex] &=~\mathsf E\big(X-\mathsf E(X)\big)~\mathsf E\big(Y-\mathsf E(Y)\big) \\[1ex] &=~\big(\mathsf E(X)-\mathsf E(X)\big)~\big(\mathsf E(Y)-\mathsf E(Y)\big) \\[1ex] &=~0 \end{align}$$


In general, when $X,Y$ are independent and $g,h$ are univariate functions, $$\mathsf E(g(X)\cdot h(Y))=\mathsf E(g(X))\cdot\mathsf E(h(Y))$$
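This more general identity can also be sanity-checked numerically; the choice of $g=\sin$, $h(y)=y^2$ and uniform variables below is arbitrary, just a concrete instance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Independent uniform draws on [0, 1].
x = rng.uniform(0.0, 1.0, size=n)
y = rng.uniform(0.0, 1.0, size=n)

g = np.sin     # an arbitrary univariate function g
h = np.square  # an arbitrary univariate function h

# For independent X, Y: E[g(X) h(Y)] = E[g(X)] * E[h(Y)].
lhs = np.mean(g(x) * h(y))
rhs = np.mean(g(x)) * np.mean(h(y))
print(lhs, rhs)  # the two estimates agree up to Monte Carlo noise
```

Taking $g(x) = x - \mathsf E(X)$ and $h(y) = y - \mathsf E(Y)$ recovers the covariance result as a special case.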