What does $cov(x_1,x_2) >> 0, cov(y_1, y_2) >> 0$ and $cov(x_1+y_1, x_2+y_2) = 0$ tell us about $x_1, x_2, y_1, y_2$?


I was posed this problem where we know: $$ cov(x_1,x_2) >> 0 \\ cov(y_1, y_2) >> 0 \\ cov(x_1+y_1, x_2+y_2) = 0 \\ $$

What does this tell us about the structure of $x_1, x_2, y_1, y_2$?

Is there any significance to the $>>$ here? I don't think $>>$ tells us anything beyond what $>$ would, since covariance isn't normalized. The magnitude of a covariance depends on the scale of the random variables, so whether it is much greater than zero or only slightly greater than zero carries no additional information. If instead we were told that the *correlation* is much greater than zero, i.e. close to $1$, that would tell us the two variables have a near-perfect linear relationship.
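To illustrate the point about normalization, here is a small NumPy check (my own example, not part of the problem): rescaling the same data, as if changing units from meters to millimeters, multiplies the covariance by a huge factor while leaving the correlation unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = x + rng.normal(size=100_000)  # noisy linear relationship

# Same data, two unit conventions (think meters vs. millimeters).
cov_m = np.cov(x, y)[0, 1]
cov_mm = np.cov(1000 * x, 1000 * y)[0, 1]   # covariance blows up by 10^6

corr_m = np.corrcoef(x, y)[0, 1]
corr_mm = np.corrcoef(1000 * x, 1000 * y)[0, 1]  # correlation is unchanged

print(cov_mm / cov_m)      # ~1e6: covariance is scale-dependent
print(corr_mm - corr_m)    # ~0:   correlation is scale-invariant
```

So "$cov >> 0$" is not an intrinsic statement about the strength of the relationship, whereas "$corr$ close to $1$" would be.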

So the best I think I can say is that $x_1$ and $x_2$ are positively correlated, with some degree of linear association, but the strength of that correlation isn't clear. The same can be said of $y_1$ and $y_2$.

$cov(x_1+y_1, x_2+y_2) = 0$ tells us that the newly formed random variables $x_1+y_1$ and $x_2+y_2$ are uncorrelated: there is no linear relationship between them, but they are not necessarily independent. The way the problem was posed to me seems to suggest that $cov(x_1+y_1, x_2+y_2) = 0$ should tell us something about $x_1, x_2, y_1, y_2$, but I can't seem to think what it could be telling us. Any hints?
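One way to probe this numerically: by bilinearity, $cov(x_1+y_1, x_2+y_2) = cov(x_1,x_2) + cov(x_1,y_2) + cov(y_1,x_2) + cov(y_1,y_2)$, so the cross-covariances $cov(x_1,y_2) + cov(y_1,x_2)$ must equal $-(cov(x_1,x_2) + cov(y_1,y_2))$, which is strongly negative. The construction below is my own hypothetical illustration satisfying all three conditions: a shared component $a$ enters each pair with a large weight but cancels in the sums.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
a = rng.normal(size=n)  # shared component, heavily weighted
b = rng.normal(size=n)
c = rng.normal(size=n)

# Hypothetical construction: a appears in every variable, but x_i + y_i
# eliminates it, leaving the independent b and c.
x1 = 10 * a
x2 = 10 * a
y1 = b - 10 * a
y2 = c - 10 * a

def cov(u, v):
    return np.cov(u, v)[0, 1]

print(cov(x1, x2))            # ~100, i.e. >> 0
print(cov(y1, y2))            # ~100, i.e. >> 0
print(cov(x1 + y1, x2 + y2))  # ~0   (the sums are just b and c)
print(cov(x1, y2) + cov(y1, x2))  # ~ -200, cancelling the positive terms
```

The simulation suggests the structural reading: the cross-pairs $(x_1, y_2)$ and $(y_1, x_2)$ must be strongly negatively correlated, exactly offsetting the strong positive within-pair covariances.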