Independence of Random Variables in a simulation of Standard Normal RV


I was going through the book Introduction to Probability Models by Sheldon M. Ross, and I didn't understand a remark about the simulation of a standard normal random variable, where the steps are:

Step 1: Generate $Y_{1}$, an exponential random variable with rate 1.

Step 2: Generate $Y_{2}$, an exponential with rate 1.

Step 3: If $Y_{2}-\left(Y_{1}-1\right)^{2} / 2>0$, set $Y=Y_{2}-\left(Y_{1}-1\right)^{2} / 2$ and go to step 4. Otherwise go to step 1 .

Step 4: Generate a random number $U$ and set $$ Z=\left\{\begin{aligned} Y_{1}, & \text { if } U \leqslant \frac{1}{2} \\ -Y_{1}, & \text { if } U>\frac{1}{2} \end{aligned}\right. $$
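For concreteness, the four steps can be sketched in Python (the function name is mine, not the book's):

```python
import random

def standard_normal_ross():
    """One draw of Z ~ N(0, 1) via the rejection method above.

    Also returns Y = Y2 - (Y1 - 1)**2 / 2, the quantity the book claims
    is Exp(1) and independent of Z.
    """
    while True:
        y1 = random.expovariate(1.0)   # Step 1: Y1 ~ Exp(1)
        y2 = random.expovariate(1.0)   # Step 2: Y2 ~ Exp(1)
        y = y2 - (y1 - 1.0) ** 2 / 2.0
        if y > 0:                      # Step 3: accept, otherwise restart
            break
    u = random.random()                # Step 4: attach a random sign
    z = y1 if u <= 0.5 else -y1
    return z, y
```

Averaging many draws, $Z$ should have mean $\approx 0$ and variance $\approx 1$, and the returned $Y$ should average to $\approx 1$.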

Indeed, they are saying that, when $Y_{1}$ is accepted, the random variables $Z$ and $Y$ are independent ($Z$ is standard normal and $Y$ is exponential with rate 1). I don't see how they justify this claim.

Thank you for your help.


There are 2 best solutions below


The variable $Y$ as defined is the amount by which $Y_2$ exceeds the random variable $(Y_1-1)^2/2$, and we are conditioning on the event that this difference is positive. We show that conditional on this event, the difference $Y_2-(Y_1-1)^2/2$ is independent of $Y_1$ (hence the difference is independent of $Z$, which is a signed version of $Y_1$).

This conditional independence is a consequence of the memoryless property of the exponential distribution, specifically the following generalization:

Claim: Let $X$ and $Y$ be independent random variables, where $Y$ has exponential distribution with rate $\lambda$ and $X$ is nonnegative with density $f(x)$. Then, given the event $Y>W$, where $W=h(X)$ is a nonnegative function of $X$, the variables $Y-W$ and $X$ are conditionally independent, and the conditional distribution of $Y-W$ is exponential with rate $\lambda$.

Proof: Let $a>0$ and $b>0$. We want to evaluate the conditional probability $$P(Y-W>b, X>a\mid Y>W)=\frac{P(Y-W>b, X>a, Y>W)}{P(Y>W)}.\tag1$$ The numerator of (1) simplifies to $$\begin{align} P(Y-W>b, X>a) &=\int_{x=a}^\infty \int_{y=b+h(x)}^\infty f(x)\lambda e^{-\lambda y}\,dy\,dx\\ &=\int_{x=a}^\infty e^{-\lambda(b+h(x))}f(x)\,dx =e^{-\lambda b}\int_{x=a}^\infty e^{-\lambda h(x)}f(x)\,dx. \end{align}\tag2 $$ Put $a=b=0$ in (2) to obtain $$P(Y>W)=\int_{x=0}^\infty e^{-\lambda h(x)}f(x)\,dx.\tag3$$

Dividing (2) by (3) then gives our final form for (1): $$P(Y-W>b, X>a\mid Y>W)=e^{-\lambda b}\frac{\int_{x=a}^\infty e^{-\lambda h(x)}f(x)\,dx}{\int_{x=0}^\infty e^{-\lambda h(x)}f(x)\,dx}\tag4 $$ By setting $a=0$ in (4), and then setting $b=0$ in (4), the desired factorization follows: $$P(Y-W>b,X>a\mid Y>W)=\underbrace{P(Y-W>b\mid Y>W)}_{e^{-\lambda b}}P(X>a\mid Y>W). $$
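The claim is easy to sanity-check by Monte Carlo. The sketch below uses arbitrary illustrative choices ($X$ uniform on $[0,2]$, $h(x)=x^2$, $\lambda=1.5$; none of these come from the answer) and verifies that, conditional on $Y>h(X)$, the probability $P(Y-W>b,\,X>a)$ factors and $Y-W$ has mean $1/\lambda$:

```python
import math
import random

def empirical_check(n=200000, lam=1.5, a=0.5, b=0.4, seed=1):
    """Monte Carlo check of the claim with X ~ Uniform(0, 2),
    h(x) = x**2, and Y ~ Exp(lam).  Conditional on Y > h(X):
      * Y - h(X) should be Exp(lam),
      * Y - h(X) and X should be independent.
    """
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        x = rng.uniform(0.0, 2.0)
        y = rng.expovariate(lam)
        w = x * x                      # W = h(X)
        if y > w:                      # condition on the event Y > W
            accepted.append((x, y - w))
    m = len(accepted)
    # Empirical conditional probabilities
    p_joint = sum(1 for x, d in accepted if d > b and x > a) / m
    p_d = sum(1 for _, d in accepted if d > b) / m
    p_x = sum(1 for x, _ in accepted if x > a) / m
    mean_d = sum(d for _, d in accepted) / m
    return p_joint, p_d, p_x, mean_d
```

By the claim, `p_d` should be close to $e^{-\lambda b}$, `p_joint` close to `p_d * p_x`, and `mean_d` close to $1/\lambda$.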


In your context, $\lambda=1$, $X$ has density $f(x):=e^{-x}$, and $h(x):=(x-1)^2/2$. You can compute the conditional distribution of $X$ given $Y>h(X)$ as follows: $$ P(X>a\mid Y>h(X))=\frac{\int_{x=a}^\infty e^{-\lambda h(x)}f(x)\,dx}{\int_{x=0}^\infty e^{-\lambda h(x)}f(x)\,dx}$$ with numerator $$\int_{x=a}^\infty e^{-\lambda h(x)}f(x)\,dx=\int_a^\infty e^{-(x-1)^2/2}e^{-x}\,dx=e^{-1/2}\int_a^\infty e^{-x^2/2}\,dx.\tag5 $$ The denominator is the constant obtained by setting $a=0$ in (5): $$\int_0^\infty e^{-\lambda h(x)}f(x)\,dx=e^{-1/2}\int_0^\infty e^{-x^2/2}\,dx=e^{-1/2}\frac{\sqrt{2\pi}}2,\tag6 $$ and therefore the conditional density of $X$ is that of the absolute value $|Z|$ of a standard Gaussian, namely $$\frac2{\sqrt{2\pi}}e^{-x^2/2},\qquad x>0.$$ (Step (6) isn't strictly necessary; from (5) alone we see that the conditional density of $X$ is proportional to that of $|Z|$.)
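The closed form in (6) can be checked numerically; the quadrature routine below is a plain trapezoid rule I wrote for the check, not something from the answer:

```python
import math

def integrand(x):
    # e^{-h(x)} * f(x) with h(x) = (x - 1)**2 / 2 and f(x) = e^{-x}
    return math.exp(-(x - 1.0) ** 2 / 2.0) * math.exp(-x)

def trapezoid(f, a, b, n=100000):
    """Composite trapezoid rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# The tail beyond x = 20 is negligible for this integrand.
lhs = trapezoid(integrand, 0.0, 20.0)              # left side of (6)
rhs = math.exp(-0.5) * math.sqrt(2 * math.pi) / 2  # closed form in (6)
```

Both sides come out to about $0.76$, which is also the acceptance probability of the rejection step.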


Define $f:\mathbb{R}^2 \longrightarrow \mathbb{R}$ by $$f(y_1,y_2)=\sqrt{\frac{2e}{\pi}}\exp\left(-y_1-y_2\right)\cdot 1_{\big\{(y_1,y_2)\in [0,\infty)^2:y_2>(y_1-1)^2/2\big\}}$$ Evidently, $f$ is the conditional pdf of $(Y_1,Y_2)$ given the event $Y_2-(Y_1-1)^2/2>0$.

Fix $y\in [0,\infty)$ and $z\in \mathbb{R}$ arbitrarily, and sample $(Y_1,Y_2)\sim f$. It follows that $$\begin{eqnarray*}\mathbb{P}\left(Y \leq y,Z\leq z\right)&=& \mathbb{P}(Y\le y, Z \leq z,U \leq 1/2) + \mathbb{P}(Y\leq y,Z\leq z,U>1/2) \\ &=& \frac{1}{2}\Bigg[\mathbb{P}(Y \leq y,Y_1 \leq z)+\mathbb{P}(Y\leq y,Y_1\geq -z)\Bigg]\end{eqnarray*}$$

If $z\geq 0$, a little integration gives $$\begin{eqnarray*}\mathbb{P}\left(Y \leq y,Z\leq z\right)&=&\frac{1}{2}\Bigg[\mathbb{P}\left(Y_2 \leq \frac{(Y_1-1)^2}{2}+y,Y_1\in [0,z]\right)+\mathbb{P}\left(Y_2 \leq \frac{(Y_1-1)^2}{2}+y,Y_1 \geq 0\right)\Bigg] \\ &=&\frac{1}{2}\Bigg[\int_0^z \int_{\frac{(y_1-1)^2}{2}}^{\frac{(y_1-1)^2}{2}+y}f(y_1,y_2)\mathrm{d}y_2\mathrm{d}y_1 + \int_0^{\infty} \int_{\frac{(y_1-1)^2}{2}}^{\frac{(y_1-1)^2}{2}+y}f(y_1,y_2)\mathrm{d}y_2\mathrm{d}y_1 \Bigg] \\ &=& \int_{-\infty}^yf_{Y}(t)\mathrm{d}t \times \int_{-\infty}^z f_{Z}(t)\mathrm{d}t\end{eqnarray*}$$ where $f_{Y}(t)=\exp(-t)\cdot 1_{[0,\infty)}(t)$ and $f_{Z}(t)=\frac{1}{\sqrt{2\pi}}\exp\Big\{-\frac{t^2}{2}\Big\}$.

On the other hand, if $z<0$, then $$\begin{eqnarray*}\mathbb{P}\left(Y \leq y, Z\leq z\right) &=&\frac{1}{2}\mathbb{P}\left(Y \leq y, Y_1 \geq -z\right) \\ &=& \frac{1}{2}\int_{-z}^{\infty}\int_{\frac{(y_1-1)^2}{2}}^{\frac{(y_1-1)^2}{2}+y}f(y_1,y_2)\mathrm{d}y_2\mathrm{d}y_1 \\ &=& \int_{-\infty}^{y}f_{Y}(t)\mathrm{d}t \times \int_{-\infty}^zf_{Z}(t)\mathrm{d}t \end{eqnarray*}$$

In both cases the joint distribution function factors as $\mathbb{P}(Y\leq y,Z\leq z)=\mathbb{P}(Y\leq y)\,\mathbb{P}(Z\leq z)$, so $Y$ and $Z$ are independent, with $Y$ exponential with rate $1$ and $Z$ standard normal.
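This factorization can also be checked empirically: run the algorithm, estimate the joint CDF at one point, and compare with the product of the Exp(1) and standard normal CDFs. The evaluation point $(y, z) = (0.7, 0.3)$ is an arbitrary choice of mine:

```python
import math
import random

def sample(rng):
    """One accepted draw of (Z, Y) from the rejection algorithm."""
    while True:
        y1 = rng.expovariate(1.0)
        y2 = rng.expovariate(1.0)
        y = y2 - (y1 - 1.0) ** 2 / 2.0
        if y > 0:
            break
    z = y1 if rng.random() <= 0.5 else -y1
    return z, y

def joint_vs_product(n=200000, y=0.7, z=0.3, seed=2):
    """Empirical P(Y <= y, Z <= z) versus F_Y(y) * F_Z(z)."""
    rng = random.Random(seed)
    pts = [sample(rng) for _ in range(n)]
    emp_joint = sum(1 for zz, yy in pts if yy <= y and zz <= z) / n
    f_y = 1.0 - math.exp(-y)                          # Exp(1) CDF
    f_z = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # N(0, 1) CDF
    return emp_joint, f_y * f_z
```

With a large sample, the two numbers agree to a few decimal places, consistent with the factorization derived above.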