Let $X$, $Y$ i.i.d. $\sim \text{N}(0, 1)$. Find the joint distribution of $(X + Y, X − Y)$


My textbook, Introduction to Probability by Blitzstein and Hwang, gives the following example:

Example 7.5.8 (Independence of sum and difference). Let $X$, $Y$ i.i.d. $\sim \text{N}(0, 1)$. Find the joint distribution of $(X + Y, X − Y)$.

It gives the following solution:

Solution: Since $(X + Y, X − Y)$ is Bivariate Normal and $\text{Cov}(X + Y, X − Y) = \text{Var}(X) − \text{Cov}(X, Y) + \text{Cov}(Y, X) − \text{Var}(Y) = 0$, $X + Y$ is independent of $X − Y$. Furthermore, they are i.i.d. $\text{N}(0, 2)$. By the same method, we have that if $X \sim \text{N}(\mu_1, \sigma^2)$ and $Y \sim \text{N}(\mu_2, \sigma^2)$ are independent (with the same variance), then $X + Y$ is independent of $X − Y$. It can be shown that the independence of the sum and difference is a unique characteristic of the Normal! That is, if $X$ and $Y$ are i.i.d. and $X + Y$ is independent of $X − Y$, then $X$ and $Y$ must have Normal distributions.
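A quick simulation (my own sketch with NumPy, not from the book; the seed and sample size are arbitrary) makes the claimed covariance structure plausible: the sample covariance matrix of $(X+Y, X-Y)$ should be close to $\operatorname{diag}(2, 2)$.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility
n = 1_000_000
x = rng.standard_normal(n)  # X ~ N(0, 1)
y = rng.standard_normal(n)  # Y ~ N(0, 1), independent of X

s, d = x + y, x - y  # sum and difference

# Sample covariance matrix: diagonal entries near 2, off-diagonal near 0.
print(np.cov(s, d))
```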

But this solution doesn't show how to find the joint distribution of $(X + Y, X - Y)$.

I know that the equation for the conditional PDF is

$$\begin{align} & f_{Y | X}(y | x) = \dfrac{f_{X,Y} (x, y)}{f_X(x)} \\ &\Rightarrow f_{X, Y}(x, y) = f_{Y|X}(y | x) f_X(x)\end{align}$$

So then how does one find the joint distribution of $(X + Y, X - Y)$?

I would greatly appreciate it if people could please take the time to show how this is done.


There are 4 solutions below.

On BEST ANSWER

The book did give you the joint distribution of $(X+Y, X-Y)$; it just didn't state it explicitly.

It tells you that $X+Y$ and $X-Y$ are i.i.d. $N(0,2)$, so you can write out the density of each as $$f(x) = \frac{1}{2\sqrt{\pi}}\exp\left(-\frac{x^2}{4}\right).$$

It further says that $X+Y$ and $X-Y$ are independent, which means their joint density function is the product of their individual density functions. Let $R\equiv X+Y$ and $S\equiv X - Y$; then their joint density is $$ \frac{1}{2\sqrt{\pi}}\exp\left(-\frac{r^2}{4}\right) \times \frac{1}{2\sqrt{\pi}}\exp\left(-\frac{s^2}{4}\right) = \frac{1}{4\pi}\exp\left(-\frac{r^2 + s^2}{4}\right).$$
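As a sanity check (my own sketch, not part of the answer; the evaluation point is arbitrary), one can confirm numerically that the product of the two $N(0,2)$ densities agrees with the simplified closed form $\frac{1}{4\pi}e^{-(r^2+s^2)/4}$:

```python
import math

def pdf_n02(x):
    # N(0, 2) density: 1 / (2*sqrt(pi)) * exp(-x^2 / 4)
    return math.exp(-x**2 / 4) / (2 * math.sqrt(math.pi))

def joint(r, s):
    # independence => joint density is the product of the marginals
    return pdf_n02(r) * pdf_n02(s)

def closed_form(r, s):
    # the simplified expression: (1 / (4*pi)) * exp(-(r^2 + s^2) / 4)
    return math.exp(-(r**2 + s**2) / 4) / (4 * math.pi)

print(joint(0.7, -1.3), closed_form(0.7, -1.3))
```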


$$E[X+Y]=E[X]+E[Y]=0$$ $$E[X-Y]=E[X]-E[Y]=0$$ $$Var[X\pm Y]=Var[X]+Var[Y]=2$$ $$Cov(X+Y, X-Y)=0$$

Hence we have $$\begin{bmatrix}X+Y \\ X-Y\end{bmatrix} \sim N\left( \begin{bmatrix}0 \\ 0\end{bmatrix}, \begin{bmatrix}2 & 0\\ 0 &2\end{bmatrix} \right)$$

Its pdf is

\begin{align}f_{X+Y, X-Y}(p,q)&=\frac1{2\pi}\det\left( \begin{bmatrix}2 & 0\\ 0 &2\end{bmatrix} \right)^{-\frac12}\exp\left( -\frac12\begin{bmatrix}p & q\end{bmatrix}\begin{bmatrix}2 & 0\\ 0 &2\end{bmatrix}^{-1}\begin{bmatrix}p \\ q\end{bmatrix}\right)\\ &=\frac1{2\pi}\cdot \frac12\exp\left(-\frac12\left( \frac12p^2+\frac12q^2 \right) \right) \\ &= \left( \frac1{2\sqrt{\pi}}\exp \left(-\frac14p^2 \right)\right)\left( \frac1{2\sqrt{\pi}}\exp \left(-\frac14q^2 \right)\right)\end{align}

which is just the product of two Normal pdfs, each with mean $0$ and variance $2$.
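To double-check the algebra above, here is a small numerical comparison (my own sketch; the evaluation point is arbitrary) of the bivariate-Normal matrix formula against the product of the two $N(0,2)$ densities:

```python
import numpy as np

Sigma = np.array([[2.0, 0.0], [0.0, 2.0]])  # covariance matrix of (X+Y, X-Y)
Sigma_inv = np.linalg.inv(Sigma)
det_Sigma = np.linalg.det(Sigma)

def pdf_matrix(p, q):
    # bivariate-Normal density with mean 0 and covariance Sigma
    w = np.array([p, q])
    return det_Sigma**-0.5 / (2 * np.pi) * np.exp(-0.5 * w @ Sigma_inv @ w)

def pdf_product(p, q):
    # product of two N(0, 2) marginal densities
    f = lambda t: np.exp(-t**2 / 4) / (2 * np.sqrt(np.pi))
    return f(p) * f(q)

print(pdf_matrix(1.2, -0.4), pdf_product(1.2, -0.4))
```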


You wrote:

Furthermore, they are i.i.d. $\operatorname N(0,2).$

I think perhaps this should have said "i.d." rather than "i.i.d.", since the first "i." (independence) had already been established and so was not "furthermore." But I wouldn't write it that way (i.e. as "i.d.") either; I would use words.

You wrote:

But this solution doesn't show how to find the joint distribution of $(X+Y,X−Y).$

However, to say that two random variables are independent and each has a certain distribution does entirely specify their joint distribution.


Let us rephrase the question. Let $(\Omega,\mathcal{F},P)$ be a probability space. Let $X=(X_{1},X_{2})$, where $X_{1}$, $X_{2}$ are i.i.d. standard Normal. Let $Y=(Y_{1},Y_{2})$ be the random vector defined by $Y_{1}=X_{1}+X_{2}$ and $Y_{2}=X_{1}-X_{2}$. Find the distribution of the random vector $Y$.

Solution: Let $\phi:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ be defined by $\phi(x,y)=(x+y,x-y)$. Put $(u,v)=\phi(x,y)$. By direct calculation, we have $x=(u+v)/2$ and $y=(u-v)/2$. Hence the inverse $\phi^{-1}$ is given by $\phi^{-1}(u,v)=((u+v)/2,(u-v)/2)$. Let $\mu_{X}$ and $\mu_{Y}$ be the distributions induced by $X$ and $Y$ respectively. That is, $\mu_{X}$ is a probability on $\mathbb{R}^{2}$ defined by $\mu_{X}(B)=P\left(X^{-1}(B)\right)$, $B\in\mathcal{B}(\mathbb{R}^{2})$. Now, let $B\in\mathcal{B}(\mathbb{R}^{2})$, then \begin{eqnarray*} & & \mu_{Y}(B)\\ & = & P\left(Y^{-1}(B)\right)\\ & = & P\left(X^{-1}\phi^{-1}(B)\right)\\ & = & \mu_{X}(\phi^{-1}(B)). \end{eqnarray*}

Observe that $\phi^{-1}$ is a linear transformation, explicitly given by \begin{eqnarray*} \phi^{-1}(u,v) & = & \begin{pmatrix}\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & -\frac{1}{2} \end{pmatrix}\begin{pmatrix}u\\ v \end{pmatrix}\\ & = & \frac{\sqrt{2}}{2}\begin{pmatrix}\cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix}\begin{pmatrix}u\\ v \end{pmatrix}, \end{eqnarray*} where $\theta=\frac{\pi}{4}$. The matrix on the right is orthogonal with determinant $-1$; it is the reflection about the line through the origin at angle $\frac{\pi}{8}$. That is, $\phi^{-1}$ is a reflection followed by scaling by factor $\frac{\sqrt{2}}{2}$.
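The property actually used below is that the matrix of $\phi^{-1}$ is $\frac{\sqrt{2}}{2}$ times an orthogonal matrix, so it scales all lengths by $\frac{\sqrt{2}}{2}$ and areas by $\frac{1}{2}$. A quick numerical check (my own sketch):

```python
import numpy as np

A = np.array([[0.5, 0.5], [0.5, -0.5]])  # matrix of phi^{-1}

# If A = (sqrt(2)/2) * Q with Q orthogonal, then Q = sqrt(2) * A.
Q = np.sqrt(2) * A
print(Q.T @ Q)           # identity matrix => Q is orthogonal
print(np.linalg.det(A))  # |det A| = 1/2, the area-scaling factor
```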

Let $f:\mathbb{R}^{2}\rightarrow\mathbb{R}$ be a pdf for the random vector $X$, given by $f(x_{1},x_{2})=\frac{1}{2\pi}\exp(-\frac{1}{2}(x_{1}^{2}+x_{2}^{2}))$.

Let $B\in\mathcal{B}(\mathbb{R}^{2})$ be a Borel set. If $m_{2}(B)=0$, then $\mu_{Y}(B)=\int_{\phi^{-1}(B)}f \, dm_2=0$ because the invertible linear map $\phi^{-1}$ maps a null set to a null set (i.e., $m_2(\phi^{-1}(B))=0$). It follows that $\mu_{Y}\ll m_{2}$ (i.e., $\mu_{Y}$ is absolutely continuous with respect to the Lebesgue measure), and hence $\mu_{Y}$ admits a p.d.f. Let $g:\mathbb{R}^{2}\rightarrow\mathbb{R}$ be a p.d.f. of $Y$ (which is unique up to a null set). In what follows, we find $g$ explicitly.

Fix $(u,v)\in\mathbb{R}^{2}$. Let $B=[u,u+\Delta u]\times[v,v+\Delta v]$ be a rectangle (note that $\Delta u$ and $\Delta v$ may be zero or negative; in that case $[u,u+\Delta u]$ is interpreted in the obvious way). By the above analysis of $\phi^{-1}$, $\phi^{-1}(B)$ is also a rectangle, with $(\frac{u+v}{2},\frac{u-v}{2})$ as one of its vertices and with length and width both scaled by $\frac{\sqrt{2}}{2}$. Therefore, $m_{2}(\phi^{-1}(B))=\frac{\sqrt{2}}{2}\cdot\frac{\sqrt{2}}{2}\cdot m_{2}(B)=\frac{1}{2}|\Delta u||\Delta v|$, where $m_{2}$ denotes the Lebesgue measure on $\mathbb{R}^{2}$. Since $f$ is continuous, the mean value theorem for integrals gives \begin{eqnarray*} & & \mu_{Y}(B)\\ & = & \mu_{X}(\phi^{-1}(B))\\ & = & \int_{\phi^{-1}(B)}f(x_{1},x_{2})\,dm_{2}(x_{1},x_{2})\\ & = & m_{2}(\phi^{-1}(B))f(\xi,\eta)\\ & = & \frac{1}{2}|\Delta u||\Delta v|f(\xi,\eta), \end{eqnarray*} for some $(\xi,\eta)\in\phi^{-1}(B)$. Note that $(\xi,\eta)$ depends on $(\Delta u,\Delta v)$; however, $(\xi,\eta)\rightarrow(\frac{u+v}{2},\frac{u-v}{2})$ as $(\Delta u,\Delta v)\rightarrow(0,0)$. It follows that $g$ is given by $g(u,v)=\frac{1}{2}f(\frac{u+v}{2},\frac{u-v}{2})=\frac{1}{4\pi}\exp(-\frac{u^2+v^2}{4})$.
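A Monte Carlo check of the resulting density $g(u,v)=\frac{1}{4\pi}e^{-(u^2+v^2)/4}$ (my own sketch; the seed, sample size, box center, and box size are all arbitrary choices): the empirical probability that $(Y_1, Y_2)$ lands in a small box around a point, divided by the box area, should approximate $g$ at that point.

```python
import numpy as np

rng = np.random.default_rng(1)  # fixed seed for reproducibility
n = 2_000_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
u, v = x1 + x2, x1 - x2  # Y1 = X1 + X2, Y2 = X1 - X2

u0, v0, h = 0.5, -0.5, 0.2  # box center and side length
in_box = (np.abs(u - u0) < h / 2) & (np.abs(v - v0) < h / 2)
empirical = in_box.mean() / h**2  # empirical density estimate

g = np.exp(-(u0**2 + v0**2) / 4) / (4 * np.pi)
print(empirical, g)
```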

/////////////////////////////////////////////////////////////////////////////

Remark: In the above, we can also argue by applying the following change-of-variable theorem:

Let $T:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ be an invertible linear map. Let $f:\mathbb{R}^{2}\rightarrow\mathbb{R}$ be an integrable function. Then $$ \int(f\circ T)|\det T|\,dm_2=\int f\,dm_2. $$

/////////////////////////////////////////////////////////////////

For our case, given $B\in\mathcal{B}(\mathbb{R}^{2})$, we have \begin{eqnarray*} \mu_{Y}(B) & = & \int_{\phi^{-1}(B)}f\,dm_2\\ & = & \int\tilde{f} \, dm_2\\ & = & \int\tilde{f}\circ\phi^{-1}\,\,|\det(\phi^{-1})|\,dm_2, \end{eqnarray*} where $\tilde{f}=f1_{\phi^{-1}(B)}$. Note that $1_{\phi^{-1}(B)}\circ\phi^{-1}=1_{B}$. Hence $\tilde{f}\circ\phi^{-1}=(f\circ\phi^{-1})(1_{\phi^{-1}(B)}\circ\phi^{-1})=(f\circ\phi^{-1})1_B$. That is, \begin{eqnarray*} \mu_Y(B) & = & \int_B(f\circ\phi^{-1})|\det(\phi^{-1})|\,\,dm_2. \end{eqnarray*} This explicitly shows that the pdf of $Y$ is \begin{eqnarray*} g(u,v) & = & |\det\phi^{-1}|f\circ\phi^{-1}(u,v)\\ & = & \frac{1}{2}f\left(\frac{u+v}{2},\frac{u-v}{2}\right). \end{eqnarray*} In this way, we do not require $f$ to be continuous.
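The change-of-variable identity can also be checked numerically (my own sketch via a Riemann sum; the grid bounds and resolution are arbitrary): with $T=\phi^{-1}$ and $f$ the standard bivariate Normal density, $\int (f\circ T)\,|\det T|\,dm_2$ should equal $\int f\,dm_2 = 1$.

```python
import numpy as np

T = np.array([[0.5, 0.5], [0.5, -0.5]])  # matrix of phi^{-1}
det_T = abs(np.linalg.det(T))            # = 1/2

def f(x1, x2):
    # standard bivariate Normal density
    return np.exp(-0.5 * (x1**2 + x2**2)) / (2 * np.pi)

# Riemann sum of (f o T) * |det T| over a large square
grid = np.linspace(-8, 8, 801)
du = grid[1] - grid[0]
U, V = np.meshgrid(grid, grid)
X1 = T[0, 0] * U + T[0, 1] * V  # first coordinate of T(u, v)
X2 = T[1, 0] * U + T[1, 1] * V  # second coordinate of T(u, v)
integral = (f(X1, X2) * det_T).sum() * du**2
print(integral)  # close to 1
```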