Determine the probability distribution of a ratio of two random variables?


Setting

You are given two independent random variables $X_0, X_1$ with common exponential density $f(x) = \alpha e^{-\alpha x}$ for $x > 0$. Let $R = \frac{X_0}{X_1}$. Determine $\Pr[R > t]$ for $t > 0$.

I got up to here

$$\Pr[R > t] = \Pr[X_0/X_1 > t] = \Pr[X_0 > t X_1] = 1 - \Pr[X_0 \le t X_1]$$

I know how to express the last probability using the distribution function of $X_0$, but it would still contain $X_1$, which is itself a random variable. So how do I proceed?


Two solutions follow.


One way to do this is to rescale: let $Y = t X_1$, so that $X_0/X_1 > t$ is equivalent to $X_0 > Y$. Now $X_0$ and $Y$ are independent exponential random variables with rate parameters $\alpha$ and $\alpha/t$ respectively. Think of two independent Poisson processes with these rates. One way to realize this is to take a combined Poisson process with rate $\alpha + \alpha/t$ and assign each occurrence to the $X_0$ process or the $Y$ process with probabilities $\dfrac{\alpha}{\alpha + \alpha/t} = \dfrac{t}{t+1}$ and $\dfrac{\alpha/t}{\alpha + \alpha/t} = \dfrac{1}{t+1}$ respectively. The probability that the next occurrence is from the $Y$ process, i.e. that $t X_1 = Y < X_0$, i.e. that $R = X_0/X_1 > t$, is then $\dfrac{1}{t+1}$.
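The competing-exponentials claim is easy to check by simulation: draw many i.i.d. $\operatorname{Exp}(\alpha)$ pairs and count how often $X_0/X_1 > t$. A minimal Monte Carlo sketch in Python (the function name and parameters are my own, not from the answer):

```python
import random

def prob_ratio_exceeds(t, alpha=1.0, n=200_000, seed=0):
    """Monte Carlo estimate of P(X0/X1 > t) for i.i.d. Exp(alpha) X0, X1.

    Equivalently P(X0 > Y) with Y = t*X1 ~ Exp(alpha/t); the answer above
    predicts this is 1/(t+1), independent of alpha.
    """
    rng = random.Random(seed)
    # Count trials where X0 > t*X1, i.e. the ratio exceeds t.
    hits = sum(
        rng.expovariate(alpha) > t * rng.expovariate(alpha)
        for _ in range(n)
    )
    return hits / n
```

For example, `prob_ratio_exceeds(2.0)` should land near $1/3$ and `prob_ratio_exceeds(1.0)` near $1/2$, matching $\frac{1}{t+1}$.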


Suppose we want the probability of an event defined by a finite collection of continuous random variables. Integrating the joint density of those variables over the region of its domain determined by the event gives the probability we are looking for. In the present case, the independence of the identically distributed exponential random variables $X_0$ and $X_1$ implies that their joint density factors into the product of the marginal densities,
$$
f_{X_0, X_1}\left(x_0, x_1\right) = f_{X_0}\left(x_0\right) f_{X_1}\left(x_1\right) = \alpha^2 e^{-\alpha(x_0 + x_1)}\,.
$$
The event $\displaystyle \left\{\frac{X_0}{X_1} > t\right\}$ is well defined because both random variables are positive valued (except on the zero-probability set $\left\{X_1 = 0\right\}$). To compute its probability, we integrate the joint density over the region of its domain where the event holds. That is,
$$
\begin{aligned}
P\left( \frac{X_0}{X_1} > t \right)
&= P\left(X_0 > t X_1 \right) \\
&= \iint\limits_{x_0 > t x_1} f_{X_0,\, X_1}\left(x_0, x_1\right)\, dx_0\, dx_1 \\
&= \int\limits_0^\infty \int\limits_{t x_1}^\infty f_{X_0}\left(x_0\right) f_{X_1}\left(x_1\right)\, dx_0\, dx_1 \\
&= \int\limits_0^\infty \left(\,\int\limits_{t x_1}^\infty f_{X_0}\left(x_0\right) dx_0 \right) f_{X_1}\left(x_1\right)\, dx_1 \\
&= \int\limits_0^\infty \left( e^{-\alpha t x_1}\right) \alpha e^{-\alpha x_1}\, dx_1 \\
&= \frac{1}{1 + t}\,.
\end{aligned}
$$
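The last step reduces everything to a one-dimensional integral, which can also be checked numerically. A sketch using the composite trapezoidal rule (function name, the cutoff `upper`, and the step count are my own choices; the integrand decays like $e^{-\alpha(1+t)x}$, so truncating at a moderate upper limit loses essentially nothing):

```python
import math

def tail_prob(t, alpha=1.0, upper=50.0, n=100_000):
    """Numerically evaluate the inner-then-outer integral

        int_0^inf exp(-alpha*t*x1) * alpha*exp(-alpha*x1) dx1

    by the composite trapezoidal rule on [0, upper]; the exact
    value worked out above is 1/(1+t).
    """
    h = upper / n

    def f(x):
        # Integrand: P(X0 > t*x1) times the density of X1 at x1.
        return math.exp(-alpha * t * x) * alpha * math.exp(-alpha * x)

    total = 0.5 * (f(0.0) + f(upper))
    total += sum(f(i * h) for i in range(1, n))
    return h * total
```

For instance, `tail_prob(2.0)` agrees with $1/3$ and `tail_prob(0.5)` with $2/3$ to several decimal places.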