Checking my logic for a discrete Random Variable problem from Hoel Port & Stone, Intro Prob. Ch 3 #16


From Hoel, Port and Stone, Intro to Probability Theory, Chapter 3, exercise #16:

Let X and Y be independent random variables having geometric densities with parameters $p_1$ and $p_2$ respectively.

Find:
a) $P(X \ge Y)$
b) $P(X = Y)$


First the definitions of the two random variables,

$P(X = x) = \begin{cases} p_1(1-p_1)^x, & x = 0, 1, 2, \ldots \\ 0, & \text{otherwise} \end{cases}$

similarly,

$P(Y = y) = \begin{cases} p_2(1-p_2)^y, & y = 0, 1, 2, \ldots \\ 0, & \text{otherwise} \end{cases}$

In trying to work part (a) out I sketched the following for myself,

Say,
$y = 0$, then $P(X \ge 0) = 1$ and $P(Y = 0) = p_2(1 - p_2)^0 = p_2$, and
$y = 1$, then $P(X \ge 1) = (1 - p_1)^1$ and $P(Y = 1) = p_2(1 - p_2)^1$, and
$y = 2$, then $P(X \ge 2) = (1 - p_1)^2$ and $P(Y = 2) = p_2(1 - p_2)^2$,
etc...

So I figured I'd somehow need to arrive at this $$P(X \ge Y) = \sum_{y=0}^\infty (1-p_1)^yp_2(1-p_2)^y$$

The following is what I came up with, and I'd like to check it with you folks: does it hold up, is the thinking correct, or am I misusing notation? [I'm not taking any courses; this is just for fun. Thanks!]

$$\begin{align} P(X \ge Y) & = P\left(\bigcup_{y=0}^\infty (\{X \ge y\} \cap \{Y = y\})\right) \\ & = \sum_{y=0}^\infty P(\{X \ge y\} \cap \{Y = y\}) \quad\text{(*)} \\ & = \sum_{y=0}^\infty P(X \ge y)P(Y = y) \\ & = \sum_{y=0}^\infty (1-p_1)^y p_2 (1-p_2)^y \\ & = p_2 \sum_{y=0}^\infty (1 - (p_1 + p_2 - p_1 p_2))^y \\ & = \frac{p_2}{p_1 + p_2 - p_1 p_2} \end{align}$$
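As a quick numerical sanity check (my own, not part of the book's solution), the truncated series can be compared against the closed form for a few arbitrary test values of $p_1$ and $p_2$:

```python
# Compare the truncated series
#   sum_{y=0}^{N-1} (1-p1)^y * p2 * (1-p2)^y
# against the closed form p2 / (p1 + p2 - p1*p2).

def p_x_ge_y_series(p1, p2, n_terms=10_000):
    """Truncated series for P(X >= Y)."""
    return sum((1 - p1) ** y * p2 * (1 - p2) ** y for y in range(n_terms))

def p_x_ge_y_closed(p1, p2):
    """Closed form p2 / (p1 + p2 - p1*p2)."""
    return p2 / (p1 + p2 - p1 * p2)

# Arbitrary test parameters; the series converges fast since the
# ratio (1-p1)(1-p2) is strictly less than 1.
for p1, p2 in [(0.3, 0.5), (0.1, 0.9), (0.5, 0.5)]:
    assert abs(p_x_ge_y_series(p1, p2) - p_x_ge_y_closed(p1, p2)) < 1e-12
```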

This seems like it might be OK? But one thing that's bugging me is that (*) line up there.

I figured I could treat the intersection sets $\{X \ge y\} \cap \{Y = y\}$ as disjoint across $y$ because each one forces $Y$ to take the single value $y$, and the events $\{Y = y\}$ are obviously disjoint for distinct $y$. But something doesn't feel quite right about it; maybe it's OK.

Thoughts? Otherwise OK? Thanks for your kind attention!

(Part (b) I assume is reasoned out similarly, skipping it for this discussion.)
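For the record, the same approach (my own quick computation along these lines, not the book's solution) seems to give part (b) as a geometric series too:

$$P(X = Y) = \sum_{y=0}^\infty P(X = y)P(Y = y) = \sum_{y=0}^\infty p_1 p_2 \big((1-p_1)(1-p_2)\big)^y = \frac{p_1 p_2}{p_1 + p_2 - p_1 p_2}.$$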


There are 2 answers below.

Best Answer

I've been thinking more about my initial post above. I think the first two lines of my solution, that is:

\begin{align} P(X \ge Y) & = P\left(\bigcup_{y=0}^\infty (\{X \ge y\} \cap \{Y = y\})\right) \\ & = \sum_{y=0}^\infty P(\{X \ge y\} \cap \{Y = y\}) \\ & = \cdots \end{align}

...are nonsense. Because $P(X = x)$ and $P(Y = y)$ are shorthand for $P(\{\omega_1 : X(\omega_1) = x\})$ and $P(\{\omega_2 : Y(\omega_2) = y\})$, where $\omega_1$ and $\omega_2$ could come from totally different probability spaces, say $(\Omega_1, \mathscr{A}_1, P_1)$ and $(\Omega_2, \mathscr{A}_2, P_2)$, taking the intersection of those sets is meaningless, or at best yields a bunch of empty sets.

Looking back at the text (H.P. & S.), it's finally sinking in that I simply need to use a joint density function (plus there's an example that's almost exactly like problem #16 on page 65, d'oh!). Since the two random variables $X$ and $Y$ are independent, I can write it up like so:

\begin{align} P(X \ge Y) & = \sum_{y=0}^\infty P(X \ge Y, Y = y) & \text{(1)} \\ & = \sum_{y=0}^\infty P(X \ge y, Y = y) & \text{(2)} \\ & = \sum_{y=0}^\infty P(X \ge y)P(Y = y) & \text{(3)} \\ & = \sum_{y=0}^\infty (1-p_1)^y p_2 (1-p_2)^y \\ & = p_2 \sum_{y=0}^\infty (1 - (p_1 + p_2 - p_1 p_2))^y \\ & = \frac{p_2}{p_1 + p_2 - p_1 p_2} \end{align}

The example in the H.P. & S. textbook on page 65 doesn't explicitly state why we can jump straight to line (1) above, but I believe it's because the sequence of events

$$ \{E_y\}_{y=0}^\infty, \quad E_y = \{(X = y, Y = y), (X = y+1, Y = y), (X = y+2, Y = y), \ldots\}, $$

consists of mutually disjoint events; i.e., if $y_1$ and $y_2$ are distinct nonnegative integers, then

$$ E_{y_1} \cap E_{y_2} = \emptyset. $$

Then to complete answering my own question, line (2) is simply restating the inequality in terms of the value $y$, then line (3) follows because $X$ and $Y$ are independent random variables.
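As one last check of my own (not from the text), a quick simulation agrees with the closed form. Geometric variates on $\{0, 1, 2, \ldots\}$ are drawn by inverse transform, using the fact that $P(X \ge k) = (1-p)^k$:

```python
import math
import random

def geom0(p, rng):
    """Sample a geometric on {0, 1, 2, ...} (failures before first success).
    Inverse transform: floor(ln u / ln(1-p)) satisfies P(X >= k) = (1-p)^k."""
    u = 1.0 - rng.random()  # u in (0, 1], avoids log(0)
    return math.floor(math.log(u) / math.log(1.0 - p))

def simulate_p_x_ge_y(p1, p2, n=200_000, seed=0):
    """Monte Carlo estimate of P(X >= Y) for independent geometrics."""
    rng = random.Random(seed)
    hits = sum(geom0(p1, rng) >= geom0(p2, rng) for _ in range(n))
    return hits / n

# Arbitrary test parameters; with n = 200,000 draws the Monte Carlo
# standard error is about 0.001, so a 0.01 tolerance is comfortable.
p1, p2 = 0.3, 0.5
closed_form = p2 / (p1 + p2 - p1 * p2)
assert abs(simulate_p_x_ge_y(p1, p2) - closed_form) < 0.01
```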

Thanks everyone for taking the time to help above, it's very nice to have this resource here at Stack Exchange to help de-confuse myself. Cheers all!

Second Answer

The step $(*)$ follows from the fact that the events $$\{E_y\}_{y=0}^\infty, \quad E_y = (X \ge y) \cap (Y = y)$$ are mutually disjoint; i.e., if $y_1$ and $y_2$ are distinct nonnegative integers, then $$E_{y_1} \cap E_{y_2} = \emptyset.$$ This is because $(Y = y_1) \cap (Y = y_2) = \emptyset$ whenever $y_1 \ne y_2$. What you have done is partition the event $X \ge Y$ into an infinite sequence of disjoint events $\{E_0, E_1, \ldots\}$ whose union is the event $X \ge Y$ but whose pairwise intersections are all empty. The total probability is then the sum of the individual probabilities; i.e., $$\Pr[E_0 \cup E_1 \cup E_2 \cup \cdots] = \Pr[E_0] + \Pr[E_1] + \cdots.$$

The subsequent step then uses the fact that $X$ and $Y$ are independent random variables, hence for each $y \in \{0, 1, 2, \ldots\}$, $$\Pr[E_y] = \Pr[(X \ge y) \cap (Y = y)] \overset{\text{ind}}{=} \Pr[X \ge y]\Pr[Y = y].$$
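To make the partition concrete, here is a small numerical illustration (my own sketch, not part of the answer above): summing the joint density $P(X = x)P(Y = y)$ directly over the truncated region $x \ge y$ recovers the same closed form.

```python
def p_x_ge_y_double_sum(p1, p2, n_max=1_000):
    """Sum P(X = x)P(Y = y) over the (truncated) region x >= y,
    mirroring the partition into disjoint events E_y."""
    total = 0.0
    for y in range(n_max):
        # The inner sum over x >= y recovers the tail
        # P(X >= y) = (1 - p1)^y term by term.
        tail = sum(p1 * (1 - p1) ** x for x in range(y, n_max))
        total += tail * p2 * (1 - p2) ** y  # P(E_y) = P(X >= y) P(Y = y)
    return total

# Arbitrary test parameters; truncation error is negligible at n_max = 1000.
p1, p2 = 0.4, 0.25
assert abs(p_x_ge_y_double_sum(p1, p2) - p2 / (p1 + p2 - p1 * p2)) < 1e-9
```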