Question about Variances


I'm currently working on a homework question and have reached a point where I think I may be right, but I am not sure. Let $X \sim N(0,1)$ and $Y \sim N(0,1)$, with $X$ and $Y$ independent. Then let: $$U = \frac{X+Y}{\sqrt2}, \qquad V = \frac{X-Y}{\sqrt2}$$

I am supposed to prove that $U$ and $V$ are independent and that both have the standard normal distribution. I know one can see this right away using MGFs, but my reasoning is as follows.

I know that the sum of two independent Gaussian RVs, multiplied by a constant, is also Gaussian. Further, I know that:

$\mathbb{E}[U] = 0$ by linearity, and by independence, $Var(U) = \frac{1}{2}(Var(X)+Var(Y)) = 1$

Now, for $V$, I also know that $\mathbb{E}[V] = 0$ by linearity of expectation. But for the variance, is it correct to write:

$$Var(V) = \frac{1}{2}(Var(X+(-Y))) = \frac{1}{2}(Var(X)+Var(-Y)) = \frac{1}{2}(Var(X) + Var(Y))$$

This seems to be the only way to show that $Var(V) = 1$. Thanks for your help.

There are 2 solutions below.

Accepted answer:

One easy way is to note that $(X,Y)$ follows a standard bivariate normal distribution and that the transformation matrix taking $(X,Y)$ to $(U,V)$ is orthogonal. So $(U,V)$ also follows a standard bivariate normal distribution, and the result follows.
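To make the orthogonality claim concrete, here is a minimal numerical check (in Python/NumPy, which the original answers do not use) that the matrix sending $(X,Y)$ to $(U,V)$ satisfies $AA^T = I$:

```python
import numpy as np

# Rows of A are the coefficients of U = (X + Y)/sqrt(2)
# and V = (X - Y)/sqrt(2).
A = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# Orthogonality: A @ A.T should equal the 2x2 identity matrix.
print(np.allclose(A @ A.T, np.eye(2)))  # True
```

Because $AA^T = I$, the covariance matrix of $(U,V)$ is $A I A^T = I$, which is exactly the standard bivariate normal covariance.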

Second answer:

Your transformation from $(X,Y)$ to $(U,V)$ is a rotation about the origin. By the spherical symmetry of the bivariate normal distribution of $(X,Y)$, the distribution of $(U,V)$ will still be bivariate normal.

If I understand your argument, it seems you have argued correctly that $U$ and $V$ are standard normal, but I don't see how you have shown independence.
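In particular, the variance step you were unsure about is justified by the scaling rule $Var(aY) = a^2\,Var(Y)$:

$$Var(-Y) = (-1)^2\,Var(Y) = Var(Y),$$

so indeed $Var(V) = \frac{1}{2}\big(Var(X) + Var(Y)\big) = 1$.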

To be sure, you need to do the transformation, get the joint distribution of $(U,V)$ and see that the joint PDF factors to imply $U$ and $V$ are independent.
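Explicitly: the inverse map is $x = (u+v)/\sqrt2$, $y = (u-v)/\sqrt2$, which has Jacobian determinant of absolute value $1$ and satisfies $x^2 + y^2 = u^2 + v^2$. Hence

$$f_{U,V}(u,v) = f_{X,Y}\big(x(u,v),\,y(u,v)\big) = \frac{1}{2\pi}e^{-(u^2+v^2)/2} = \frac{1}{\sqrt{2\pi}}e^{-u^2/2}\cdot\frac{1}{\sqrt{2\pi}}e^{-v^2/2},$$

which factors into the product of two standard normal densities, proving independence.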

In case you have any doubt about the truth of the result, the brief simulation below in R statistical software illustrates (without proof, of course) that $U$ and $V$ are uncorrelated. For jointly normal variables, zero correlation implies independence.

Note that both output vectors are essentially $(0, 0, 1, 1, 0),$ for two means, two SDs, and correlation, respectively. (A million observations should give 2 or 3 place accuracy.)

m = 10^6;  x = rnorm(m);  y = rnorm(m)
c(mean(x), mean(y), sd(x), sd(y), cor(x,y))
##  4.379105e-05 -7.171200e-04  1.000159e+00  9.999017e-01 -5.445308e-04
u = (x+y)/sqrt(2);  v = (x-y)/sqrt(2)
c(mean(u), mean(v), sd(u), sd(v), cor(u,v))
## -0.0004761154  0.0005380453  0.9997578890  1.0003024362  0.0002569178

Orthogonality is crucial: $S = (2X + Y)/\sqrt{5}$ and $T = (2X - Y)/\sqrt{5}$ are both standard normal, but they are not independent. Note the non-zero correlation:

s = (2*x + y)/sqrt(5);  t = (2*x - y)/sqrt(5) 
c(mean(s), mean(t), sd(s), sd(t), cor(s,t))
## -0.0002815379  0.0003598737  0.9998894485  1.0003250527  0.6001644590
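The observed correlation near $0.6$ matches the exact value: with $X$ and $Y$ independent standard normal,

$$Cov(S,T) = \frac{1}{5}\,Cov(2X+Y,\,2X-Y) = \frac{1}{5}\big(4\,Var(X) - Var(Y)\big) = \frac{3}{5},$$

and since $Var(S) = Var(T) = \frac{1}{5}(4+1) = 1$, we get $Corr(S,T) = 3/5 = 0.6$.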

Plots of 50,000 realizations each: [scatterplot image not shown]