Prove that two random variables are independent and normally distributed


Let $M$ and $N$ be independent random variables, each uniformly distributed on $[0, 1]$.

Define two independent sequences $(M_n)_{n\geq 1}$ and $(N_n)_{n\geq 1}$ of i.i.d. random variables, uniformly distributed on $[-1, 1]$. Let $Z = \inf \{ n \geq 1 : 0 < M_n^2 + N_n^2 < 1 \}$ and

$X = M_Z \sqrt{\frac{-2 \log (M_Z^2 + N_Z^2)}{M_Z^2 + N_Z^2}}~~$ and $Y = N_Z \sqrt{\frac{-2 \log (M_Z^2 + N_Z^2)}{M_Z^2 + N_Z^2}}$

What is the distribution of $Z$? Show that $X$ and $Y$ are two independent random variables, each distributed $N(0,1)$.

Addendum. A previous, required question related to this exercise asks to show that $X$ and $Y$ are independent random variables distributed $N(0, 1)$, given that:

$ X = \sqrt{-2 \log(M)} \cos(2 \pi N)$ and $Y = \sqrt{-2 \log(M)} \sin(2 \pi N)$

I already proved this part using the change-of-variables transformation. Independence then follows because the joint density of $(X, Y)$ factors into the product of a density in $x$ alone and a density in $y$ alone.
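As a numerical sanity check of the Box-Muller formulas in the addendum, the sketch below (names like `box_muller` are mine, not from the exercise) draws many $(X, Y)$ pairs and checks that the empirical mean, variance, and covariance are close to $0$, $1$, and $0$, as they should be for independent standard normals:

```python
import math
import random

def box_muller(rng):
    """One draw of (X, Y) from the Box-Muller formulas in the exercise."""
    m = 1.0 - rng.random()  # M ~ Uniform(0, 1]; avoids log(0)
    n = rng.random()        # N ~ Uniform[0, 1)
    r = math.sqrt(-2.0 * math.log(m))
    return r * math.cos(2 * math.pi * n), r * math.sin(2 * math.pi * n)

rng = random.Random(0)
xs, ys = zip(*(box_muller(rng) for _ in range(100_000)))

mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
var_x = sum(v * v for v in xs) / len(xs) - mean_x ** 2
cov_xy = sum(x * y for x, y in zip(xs, ys)) / len(xs) - mean_x * mean_y

# Should be close to 0, 1, 0 respectively
print(mean_x, var_x, cov_xy)
```

This only checks the first two moments, of course; it does not replace the change-of-variables proof.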


Best answer:

The pair $(M_Z, N_Z)$ is uniformly distributed on the unit disk, so, writing $(M, N)$ for $(M_Z, N_Z)$, we have $f_{M,N}(m,n) = \pi^{-1}$ inside the unit disk and zero outside. The new variables $X, Y$ are expressed in terms of $M, N$. Recall the change-of-variables theorem: if $g : (x,y) \mapsto (m, n)$ is a bijection with continuous partial derivatives, then the joint density of $X, Y$ is given by

$$f_{X,Y}(x,y) = f_{M,N}( g_1(x,y), g_2(x,y) ) \cdot |J_g|$$

where $J_g$ is the determinant of the Jacobian matrix of $g,$ and $|J_g|$ is the absolute value of that.

For this specific question, rearranging the defining equations for $X,Y,$ gives $$g(x,y) = h(x,y)\begin{pmatrix}x\\y\end{pmatrix}, \ \ \text{ where }\ h(x,y) = \frac{\exp \left(-\frac{x^2+y^2}{4}\right)}{\sqrt{x^2+y^2}}$$

One could start routinely calculating the partial derivatives and simplifying the resulting terms to compute $J_g$, but the computation is considerably easier if we exploit the special form that $g$ has:
\begin{align}
J_g &= \frac{\partial g_1}{\partial x} \frac{\partial g_2}{\partial y} - \frac{\partial g_1}{\partial y} \frac{\partial g_2}{\partial x}\\
&= \left(h + x \frac{\partial h}{\partial x}\right)\left(h + y \frac{\partial h}{\partial y}\right) - \left(x \frac{\partial h}{\partial y}\right)\left(y \frac{\partial h}{\partial x}\right)\\
&= h^2 + \left( x \frac{\partial h}{\partial x} + y \frac{\partial h}{\partial y}\right)h\\
&= h^2 \left( 1 + \bigg\langle \nabla \log h \ , \begin{pmatrix}x\\y\end{pmatrix} \bigg\rangle \right)
\end{align}

We have $\displaystyle\nabla \log h(x,y) = \left( \frac{-1}{x^2+y^2} - \frac{1}{2} \right) \begin{pmatrix}x\\y\end{pmatrix},$ so $\displaystyle 1 + \bigg\langle \nabla \log h \ , \begin{pmatrix}x\\y\end{pmatrix} \bigg\rangle = - \frac{x^2+y^2}{2}.$

Thus, $$J_g = - \frac{1}{2} \exp \left( - \frac{x^2+y^2}{2}\right)$$

and $$f_{X,Y}(x,y) = \frac{1}{2\pi} \exp \left( - \frac{x^2+y^2}{2}\right)$$

Therefore, the joint distribution of $X$ and $Y$ is that of the standard bivariate normal distribution, which implies that $X$ and $Y$ are independent identically distributed standard normal variables.
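The closed-form Jacobian above is easy to double-check numerically. The sketch below (the helper names `h`, `g`, `jacobian_det` are mine) implements $g$ exactly as defined in the answer and compares a central finite-difference Jacobian determinant against $-\frac12 e^{-(x^2+y^2)/2}$ at a test point:

```python
import math

def h(x, y):
    """h(x, y) = exp(-(x^2+y^2)/4) / sqrt(x^2+y^2), as in the answer."""
    s = x * x + y * y
    return math.exp(-s / 4.0) / math.sqrt(s)

def g(x, y):
    """g(x, y) = h(x, y) * (x, y), the inverse map (X, Y) -> (M, N)."""
    return x * h(x, y), y * h(x, y)

def jacobian_det(x, y, eps=1e-6):
    """Central finite differences for the determinant of the 2x2 Jacobian of g."""
    g1x = (g(x + eps, y)[0] - g(x - eps, y)[0]) / (2 * eps)
    g1y = (g(x, y + eps)[0] - g(x, y - eps)[0]) / (2 * eps)
    g2x = (g(x + eps, y)[1] - g(x - eps, y)[1]) / (2 * eps)
    g2y = (g(x, y + eps)[1] - g(x, y - eps)[1]) / (2 * eps)
    return g1x * g2y - g1y * g2x

x, y = 0.7, -1.3  # arbitrary test point away from the origin
closed_form = -0.5 * math.exp(-(x * x + y * y) / 2.0)
print(abs(jacobian_det(x, y) - closed_form) < 1e-6)  # True
```

A single test point does not prove the identity, but it catches sign and factor errors in derivations like this one cheaply.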

Answer:

From the definitions of the variables, $(M_Z,N_Z)$ are the coordinates of a point chosen from the unit disk with uniform distribution over the area of the disk. (Specifically, these points are chosen by a rejection method: generate points uniformly in the circumscribed square until one lands in the disk, and output that point.)

There is a formula for converting the coordinates of a point chosen uniformly in the unit disk into a pair of normal random variables. I think you’re looking at some version of that formula in the definitions of $X$ and $Y$.

Your earlier exercise seems to have been another version of this formula. So if you can prove that $(M_Z,N_Z)$ is selected uniformly from the unit disk and if you can relate a random point like that to your earlier exercise, I think you can prove what you need to prove.
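The rejection scheme described here is easy to simulate. Below is a minimal Python sketch (function names are mine): it draws points uniformly from $[-1,1]^2$ until one lands strictly inside the unit disk, records the number of attempts $Z$, and returns $(X, Y)$ via the formula in the question. If $Z$ is geometric with success probability $\pi/4$ then $E[Z] = 4/\pi$, and if $X$ is standard normal then $E[X^2] = 1$; the simulation checks both:

```python
import math
import random

def polar_sample(rng):
    """Draw (Z, X, Y) as defined in the exercise: reject points of
    [-1, 1]^2 until one falls strictly inside the unit disk."""
    z = 0
    while True:
        z += 1
        m = rng.uniform(-1.0, 1.0)
        n = rng.uniform(-1.0, 1.0)
        s = m * m + n * n
        if 0.0 < s < 1.0:  # accepted on attempt z
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return z, m * factor, n * factor

rng = random.Random(1)
samples = [polar_sample(rng) for _ in range(100_000)]
zs = [z for z, _, _ in samples]
xs = [x for _, x, _ in samples]

mean_z = sum(zs) / len(zs)                  # expect about 4/pi ~ 1.27
var_x = sum(x * x for x in xs) / len(xs)    # expect about 1 (mean of X is ~0)
print(mean_z, var_x)
```

This is essentially the Marsaglia polar method for generating normal variates, which is the "formula for converting a uniform point in the disk" that this answer alludes to.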

Answer:

For the distribution of $Z$, first note by a geometric argument (area of the disk over area of the square) that $P(M_n^2+N_n^2<1)=\frac{\pi}{4}$. For $Z$ to take the value $n$, the pair $(M_r,N_r)$ must fall outside the disk for every $r<n$ and then inside it for $r=n$, so $$P(Z=n)=\left(1-\frac{\pi}{4}\right)^{n-1}\frac{\pi}{4},$$ i.e. $Z$ is geometric with parameter $\pi/4$.

For the second part, it looks fruitful to consider the random variable $T=e^{-\frac12(X^2+Y^2)}=e^{\log(M_Z^2+N_Z^2)}=M_Z^2+N_Z^2$. Clearly, $T$ can only take values between $0$ and $1$. For such a value $t$, we sum over the values of $Z$: $$P(T<t)=\sum_{n=1}^\infty \left(1-\frac{\pi}{4}\right)^{n-1}P(M_n^2+N_n^2<t) =\sum_{n=1}^\infty \left(1-\frac{\pi}{4}\right)^{n-1}\frac{\pi t}{4} =t,$$ by summing the geometric series and simplifying. In other words, $T$ is uniform on $(0,1)$.
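The key claim here, that $T = M_Z^2 + N_Z^2$ is uniform on $(0,1)$, can be checked by simulation. The sketch below (the helper name `accepted_radius_sq` is mine) draws many accepted pairs and compares the empirical CDF of $T$ against the identity function at a few points:

```python
import random

def accepted_radius_sq(rng):
    """Repeat until (M, N) drawn from [-1, 1]^2 lands strictly inside
    the unit disk; return T = M^2 + N^2 for the accepted pair."""
    while True:
        m = rng.uniform(-1.0, 1.0)
        n = rng.uniform(-1.0, 1.0)
        s = m * m + n * n
        if 0.0 < s < 1.0:
            return s

rng = random.Random(2)
ts = [accepted_radius_sq(rng) for _ in range(100_000)]

# If T ~ Uniform(0, 1) then P(T < t) = t; compare the empirical CDF at a few t
ecdf = {t: sum(1 for v in ts if v < t) / len(ts) for t in (0.25, 0.5, 0.75)}
print(ecdf)  # each value should be close to its key
```

Intuitively this is just the familiar fact that for a uniform point in the disk, the squared radius is uniform, which is exactly what the geometric-series computation above establishes.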

So $$P(X^2+Y^2<w^2)=P\left(-\tfrac12(X^2+Y^2)>-\tfrac12 w^2\right)=1-P\left(-\tfrac12(X^2+Y^2)<-\tfrac12 w^2\right)$$ $$=1-P\left(e^{-\frac12(X^2+Y^2)}<e^{-\frac12 w^2}\right)=1-P\left(T<e^{-\frac12 w^2}\right)=1-e^{-\frac12 w^2}.$$

But $$P(X^2+Y^2<w^2)=1-e^{-\frac12 w^2}$$ is exactly what holds when $X$ and $Y$ are independent $N(0,1)$ (in that case $X^2+Y^2$ is exponential with mean $2$). So this suggests we are on the right track, but it does not by itself complete the proof.