Probability of a normally distributed variable being greater than another normally distributed variable


I have seen this question addressed before, but I have a problem deriving the proof. Namely, if we have two independent normally distributed variables, $x$ and $y$, with densities $f_x(x)$ and $f_y(y)$, then for an arbitrary number $n$, the probability that $x$ is at most $n$ and $y$ is greater than $n$ is:

$P(x\leq n \wedge y>n) = \int_{-\infty}^n f_x(x)dx \int_n^\infty f_y(y)dy$

If this is correct, which I think it is, then integrating over all possible values of $n$, the probability $P(x<y)$ should be:

$P(x<y) = \int_{-\infty}^\infty \int_{-\infty}^n f_x(x)dx \int_n^\infty f_y(y)dy dn $
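As a sanity check, here is a minimal numerical sketch (my addition; it assumes independent standard normals) that evaluates the proposed expression with the trapezoid rule and compares it against a Monte Carlo estimate of $P(x<y)$:

```python
import math
import random

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def proposed_integral(lo=-10.0, hi=10.0, steps=20000):
    """Trapezoid-rule evaluation of the proposed formula:
    the integral over n of F_x(n) * (1 - F_y(n))."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        n = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * Phi(n) * (1.0 - Phi(n))
    return total * h

# Monte Carlo estimate of P(x < y) for independent standard normals
random.seed(0)
N = 200_000
mc = sum(random.gauss(0, 1) < random.gauss(0, 1) for _ in range(N)) / N

print(proposed_integral())  # ≈ 0.564 (this is 1/sqrt(pi), not a probability)
print(mc)                   # ≈ 0.5, the true P(x < y) by symmetry
```

The two numbers disagree: the integrand is already a probability for each fixed $n$, so integrating it over $n$ yields the expected length of the interval $\{n : x\leq n < y\}$, i.e. $E[\max(y-x,0)]$, rather than $P(x<y)$.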

Well, now I have trouble getting the same result as was derived, for example, here: http://www.johndcook.com/blog/2008/07/26/random-inequalities-ii-analytical-results/

I hope somebody can explain to me where I went wrong.

Best


There are 2 best solutions below

Answer 1 (score: 0)

Note that $x<y$ is equivalent to $\frac{x}{y}<1$ only when $y>0$; when the denominator is negative, the inequality reverses.

Have a look at the Wikipedia page for the ratio distribution.

Answer 2 (score: 8)

If $X\sim N\left(\mu_{X},\sigma_{X}^{2}\right)$ and $Y\sim N\left(\mu_{Y},\sigma_{Y}^{2}\right)$ are independent, then: $$Z=X-Y\sim N\left(\mu_{X}-\mu_{Y},\sigma_{X}^{2}+\sigma_{Y}^{2}\right)$$ and: $$P\left\{ X<Y\right\} =P\left\{ Z<0\right\} =\cdots $$
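A minimal sketch of this closed form (function name mine), using the fact that $P\left\{ Z<0\right\} =\Phi\!\left(\frac{\mu_{Y}-\mu_{X}}{\sqrt{\sigma_{X}^{2}+\sigma_{Y}^{2}}}\right)$ with $\Phi$ the standard normal CDF:

```python
import math

def p_x_less_than_y(mu_x, sigma_x, mu_y, sigma_y):
    """P(X < Y) for independent normals X, Y, via Z = X - Y."""
    mu_z = mu_x - mu_y                            # mean of Z
    sigma_z = math.sqrt(sigma_x**2 + sigma_y**2)  # std dev of Z
    # P(Z < 0) = Phi(-mu_z / sigma_z)
    return 0.5 * (1.0 + math.erf(-mu_z / (sigma_z * math.sqrt(2.0))))

print(p_x_less_than_y(0, 1, 0, 1))  # 0.5 (identical distributions, by symmetry)
```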


edit:

Let $g\left(x,y\right)$ be prescribed by $\left(x,y\right)\mapsto1$ if $x<y$ and $\left(x,y\right)\mapsto0$ otherwise.

$$P\left\{ X<Y\right\} =\mathbb{E}g\left(X,Y\right)=\int\int g\left(x,y\right)f_{X}\left(x\right)f_{Y}\left(y\right)dxdy=\int_{-\infty}^{\infty}\int_{-\infty}^{y}f_{X}\left(x\right)f_{Y}\left(y\right)dxdy=\int_{-\infty}^{\infty}f_{Y}\left(y\right)\int_{-\infty}^{y}f_{X}\left(x\right)dxdy$$
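Numerically, this last form agrees with the closed form via $Z=X-Y$ above. A small quadrature sketch (trapezoid rule; names and integration bounds are mine, chosen wide enough for the parameters tested):

```python
import math

def pdf(t, mu, s):
    """Normal density with mean mu and std dev s."""
    return math.exp(-0.5 * ((t - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def cdf(t, mu, s):
    """Normal CDF with mean mu and std dev s."""
    return 0.5 * (1.0 + math.erf((t - mu) / (s * math.sqrt(2.0))))

def p_x_lt_y(mu_x, s_x, mu_y, s_y, lo=-12.0, hi=12.0, steps=40000):
    """Trapezoid-rule evaluation of the outer integral of f_Y(y) * F_X(y)."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        y = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * pdf(y, mu_y, s_y) * cdf(y, mu_x, s_x)
    return total * h

# matches Phi((mu_Y - mu_X) / sqrt(s_X^2 + s_Y^2))
print(p_x_lt_y(0, 1, 1, 2))  # ≈ 0.6726
```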