The probability of $n$ consecutive successful trials before $m$ consecutive failures


Q4. Assume that you play a game of repeated independent trials. Each trial has probability of success $p$ and probability of failure $q=1-p$. You win the game if you obtain $n$ consecutive successes before obtaining $m$ consecutive failures. Compute the probability of winning. [Hint: Use the LTP by conditioning on the outcome of the first trial. Then apply the LTP again to the conditional probabilities involved (consider conditioning on $n-1$ consecutive successes and/or $m-1$ consecutive failures).] Answer. Define the following events:

  • $E=\{n$ consecutive successes before $m$ consecutive failures $\}$
  • $S_i=\{$ success in the $i$-th trial $\}$
  • $S_{i: j}=\{$ consecutive successes in trials from $i$ to $j\}$, i.e. $S_{i: j}=S_i \cap S_{i+1} \cap \ldots \cap S_j$.
  • $F_{i: j}=\{$ consecutive failures in trials from $i$ to $j\}$, i.e. $F_{i: j}=\overline{S_i} \cap \overline{S_{i+1}} \cap \ldots \cap \overline{S_j}$.

We apply the LTP using the partition $\left\{S_1, \overline{S_1}\right\}$ for the first trial: $$ \begin{aligned} \mathbb{P}(E) & =\mathbb{P}\left(E \mid S_1\right) \mathbb{P}\left(S_1\right)+\mathbb{P}\left(E \mid \overline{S_1}\right) \mathbb{P}\left(\overline{S_1}\right) \\ & =p \cdot \mathbb{P}\left(E \mid S_1\right)+q \cdot \mathbb{P}\left(E \mid \overline{S_1}\right) . \end{aligned} $$

Now for $\mathbb{P}\left(E \mid S_1\right)$ we apply the LTP again, conditioning on whether trials $2, 3, \ldots, n$ were all successes or not: $$ \begin{aligned} \mathbb{P}\left(E \mid S_1\right) & =\mathbb{P}\left(E \mid S_1 \cap S_{2: n}\right) \mathbb{P}\left(S_{2: n}\right)+\mathbb{P}\left(E \mid S_1 \cap \overline{S_{2: n}}\right) \mathbb{P}\left(\overline{S_{2: n}}\right) \\ & =1 \cdot p^{n-1}+\left(1-p^{n-1}\right) \mathbb{P}\left(E \mid S_1 \cap \overline{S_{2: n}}\right), \end{aligned} $$ where we used that $\mathbb{P}\left(E \mid S_1 \cap S_{2: n}\right)=1$, since in this case the first $n$ trials were all successes and hence we win, and we used the independent-trials assumption to get $\mathbb{P}\left(S_{2: n}\right)=p^{n-1}$. It remains to obtain a more helpful expression for $\mathbb{P}\left(E \mid S_1 \cap \overline{S_{2: n}}\right)$. The event $\overline{S_{2: n}}$ implies that there was at least one failure among trials $2$ to $n$. Conditioning on the trial at which the first such failure occurs, the process from that point on looks exactly like a fresh game whose first trial is a failure, so the probability of winning is the same as if we restarted the process with a failure on the first trial. This means that $\mathbb{P}\left(E \mid S_1 \cap \overline{S_{2: n}}\right)=\mathbb{P}\left(E \mid \overline{S_1}\right)$, and the equation above becomes $$ \mathbb{P}\left(E \mid S_1\right)=p^{n-1}+\left(1-p^{n-1}\right) \mathbb{P}\left(E \mid \overline{S_1}\right). $$
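By the symmetric argument suggested in the hint, $\mathbb{P}\left(E \mid \overline{S_1}\right)=\left(1-q^{m-1}\right) \mathbb{P}\left(E \mid S_1\right)$ (if trials $2$ to $m$ are all failures we lose; otherwise we restart as if the first trial were a success), and the two linear equations can then be solved. As a sketch, the following Python snippet (the function names are my own, not part of the problem set) evaluates the resulting closed form and checks it against a Monte Carlo simulation:

```python
import random

def win_probability(p, n, m):
    """P(E): n consecutive successes occur before m consecutive failures.

    Solves the pair of LTP equations
        a = p^(n-1) + (1 - p^(n-1)) * b   # a = P(E | S1)
        b = (1 - q^(m-1)) * a             # b = P(E | not S1), symmetric step
    and returns P(E) = p*a + q*b.
    """
    q = 1 - p
    a = p ** (n - 1) / (1 - (1 - p ** (n - 1)) * (1 - q ** (m - 1)))
    b = (1 - q ** (m - 1)) * a
    return p * a + q * b

def simulate_win(p, n, m, trials=200_000, seed=0):
    """Monte Carlo estimate of the same probability, as a sanity check."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        run, last = 0, None  # length and type of the current run
        while True:
            s = rng.random() < p
            run = run + 1 if s == last else 1
            last = s
            if s and run == n:
                wins += 1
                break
            if not s and run == m:
                break
    return wins / trials

# For p = 1/2, n = 2, m = 3 the closed form evaluates to exactly 7/10.
print(win_probability(0.5, 2, 3))
print(simulate_win(0.5, 2, 3))
```

The simulation only tracks the length of the current run of identical outcomes, which is all the state the game needs.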

For this question, it states that $\mathbb{P}\left(E \mid S_1 \cap \overline{S_{2: n}}\right)=\mathbb{P}\left(E \mid \overline{S_1}\right)$. I'm having some trouble understanding this. For example, what if there is a failure at the $(n-1)$-th trial? Wouldn't $$\mathbb{P}\left(E \mid S_1 \cap \overline{S_{2: n}}\right)=\mathbb{P}\left(E \mid S_1\right)$$ hold, since we're essentially starting over again with one successful trial?


Maybe an example can help.

If e.g. $n=4$ and a success is identified with tossing a head then: $$S_1\cap\overline{S_{2:n}}=S_1\cap\overline{S_{2:4}}=HT\cup HHT\cup HHHT$$

where e.g. $HHT$ is a notation for the event that the first two tosses are heads and the third is a tail.

Note that e.g. $S_2$ is the event $HH\cup TH$.

If we are only interested in the probability of winning then starting from one of these three positions is "the same" as starting from $T$.

This fact can be written as: $$\mathbb P(E\mid S_1\cap\overline{S_{2:4}})=\mathbb P(E\mid \overline{S_1}).$$
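This can also be checked numerically. The sketch below (Python; the choice $m=3$ and all function names are my own, for illustration) estimates the three conditional probabilities by simulation for $p=1/2$, $n=4$: the estimate for $\mathbb P(E\mid S_1\cap\overline{S_{2:4}})$ agrees with $\mathbb P(E\mid \overline{S_1})$, not with $\mathbb P(E\mid S_1)$:

```python
import random

def play(p, n, m, rng):
    """Play one game; return (first n outcomes, won?).

    Extra trials are generated past the game's end if needed, so that the
    events S_1 and S_{2:n} (which concern trials 1..n) are always defined;
    these extra trials cannot change the outcome.
    """
    seq, run, last, won = [], 0, None, None
    while won is None or len(seq) < n:
        s = rng.random() < p
        seq.append(s)
        if won is None:  # game still undecided
            run = run + 1 if s == last else 1
            last = s
            if s and run == n:
                won = True
            elif not s and run == m:
                won = False
    return seq[:n], won

def estimates(p, n, m, trials=300_000, seed=1):
    """Monte Carlo estimates of
    P(E | S1 and not S_{2:n}),  P(E | not S1),  P(E | S1)."""
    rng = random.Random(seed)
    hits = {"A": [0, 0], "B": [0, 0], "C": [0, 0]}
    for _ in range(trials):
        seq, won = play(p, n, m, rng)
        events = []
        if seq[0]:
            events.append("C")            # S1
            if not all(seq[1:n]):
                events.append("A")        # S1 and complement of S_{2:n}
        else:
            events.append("B")            # complement of S1
        for e in events:
            hits[e][0] += won             # won is a bool; True counts as 1
            hits[e][1] += 1
    return tuple(hits[e][0] / hits[e][1] for e in ("A", "B", "C"))

a, b, c = estimates(0.5, 4, 3)
print(a, b, c)  # expect a and b close together, c noticeably larger
```

For these parameters the recursion in the question gives $\mathbb P(E\mid \overline{S_1})=3/11\approx 0.27$ and $\mathbb P(E\mid S_1)=4/11\approx 0.36$, so the gap between the two hypotheses is easy to see in the estimates.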