I am presenting a proof of a statement made in an example in the book A First Course in Probability by Sheldon Ross. Could someone please check my proof for correctness? The example comes from Chapter 3, Section 3.5, Example 5C, Page 95. The book gives a solution, but there is a step that I don't really understand, so I tried proving it myself. The original question is as follows:
Independent trials, each resulting in a success with probability $p$ or a failure with probability $q = (1-p)$, are performed. We are interested in computing the probability that a run of $n$ consecutive successes occurs before a run of $m$ consecutive failures. We are given the following solution:
Solution: Let $E$ be the event that a run of $n$ consecutive successes occurs before a run of $m$ consecutive failures. To obtain $P(E)$, we start by conditioning on the outcome of the first trial. That is, letting $H$ denote the event that the first trial results in a success, we obtain $$P(E) = pP(E|H)+qP(E|\bar H)$$ Now, given that the first trial was successful, one way we can get a run of $n$ successes before a run of $m$ failures would be to have the next $n-1$ trials all result in successes. So, let us condition on whether or not that occurs. That is, letting $F$ be the event that trials 2 through $n$ are all successes, we obtain $$P(E|H) = P(E|FH)P(F|H)+ P(E|\bar{F}H)P(\bar{F}|H)$$ On the one hand, clearly, $P(E|FH) = 1$; on the other hand, if the event $\bar{F}H$ occurs, then the first trial would result in a success, but there would be a failure some time during the next $n-1$ trials. However, when this failure occurs, it would wipe out all of the previous successes, and the situation would be exactly as if we had started out with a failure. Hence, $$P(E|\bar{F}H) = P(E|\bar{H})$$
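Before getting to my proof, here is a quick numerical sanity check I wrote for the book's first-step conditioning identity $P(E) = pP(E|H)+qP(E|\bar H)$. This is my own Monte Carlo sketch, not from the book, and the parameters $p=0.5$, $n=3$, $m=2$ are arbitrary illustrative choices:

```python
import random

def play(p, n, m, rng):
    """Run one game: independent trials until a run of n successes
    or a run of m failures occurs. Returns (E, first_trial_success)."""
    succ = fail = 0
    first = None
    while True:
        s = rng.random() < p
        if first is None:
            first = s
        if s:
            succ, fail = succ + 1, 0
            if succ == n:
                return True, first    # run of n successes: E occurs
        else:
            fail, succ = fail + 1, 0
            if fail == m:
                return False, first   # run of m failures: E does not occur

def estimate(p, n, m, trials=200_000, seed=1):
    """Estimate P(E), P(E|H), and P(E|H-bar) by simulation."""
    rng = random.Random(seed)
    games = [play(p, n, m, rng) for _ in range(trials)]
    p_E = sum(e for e, _ in games) / trials
    given_H = [e for e, h in games if h]
    given_Hbar = [e for e, h in games if not h]
    return p_E, sum(given_H) / len(given_H), sum(given_Hbar) / len(given_Hbar)

if __name__ == "__main__":
    p, n, m = 0.5, 3, 2
    p_E, p_E_H, p_E_Hbar = estimate(p, n, m)
    print(f"P(E) ~ {p_E:.3f}")
    print(f"p*P(E|H) + q*P(E|~H) ~ {p * p_E_H + (1 - p) * p_E_Hbar:.3f}")
```

For these parameters the two printed values agree to within Monte Carlo error; solving the first-step recursion by hand for $p=0.5$, $n=3$, $m=2$ gives $P(E)=0.3$, which matches the estimate.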
Pause!! This is the part that I don't really understand. I agree that when this failure occurs, it wipes out all of the previous successes. But how is the situation the same as if we had started with a failure, and in particular, why is it independent of when the failure occurred within $\bar{F}$?
P.S.: The text of this question was taken from an already-asked question, but I am posting it as a new question since the original has been inactive for a long time.
Original Question : https://math.stackexchange.com/posts/2283113/edit
Problem statement: We need to calculate $P(E|\bar{F}H)$. The event $\bar{F}$ concerns trials $2 \to n$, a total of $n-1$ trials.
The event $\bar{F}$ can be seen in another way. Let $\bar{F_{i}}$ denote the event that trial $i$ is the last of the $n-1$ trials on which a failure occurred. For example, $\bar{F_{3}}$ means the last failure occurred on trial 3, and from then on all trials up to trial $n$ were successes.
Properties of $\bar{F_{i}}$:
1. All the $\bar{F_{i}}$ are mutually exclusive, since only one last failure can occur; two distinct last failures cannot occur by definition: $(\bar{F_{i}}\cap\bar{F_{j}})_{i \neq j} = \emptyset$.
2. Also, if $\bar{F}$ occurs, then one of the $\bar{F_{i}}$ must occur, so the union $\bigcup\limits_{i=2}^{n} \bar{F_{i}} = \bar{F}$ is exhaustive.
3. Also, any occurrence of $\bar{F}$ can be described by exactly one $\bar{F_{i}}$. Since fewer than $n$ trials are involved, the event $E$ cannot occur within them, and whenever a failure occurs in $\bar{F}$, the entire experiment resets from that point: all the previous successes and failures no longer matter.
4. Whenever $\bar{F_{i}}$ occurs, the experiment restarts from trial $i$, as if we had started with a failure. That is, the probability from that point is again $P(E|\bar{H})$.
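The decomposition above can be checked empirically. The following sketch is my own (with illustrative parameters $p=0.5$, $n=3$): it samples the first $n$ trials, keeps only the samples where $\bar{F}H$ occurs, and tallies the position of the last failure. Since exactly one $\bar{F_i}$ occurs in every kept sample (properties 1 and 2), the tallied conditional frequencies necessarily sum to 1, and the printout also lets one inspect how likely each position is:

```python
import random
from collections import Counter

def last_failure(p, n, rng):
    """Sample trials 1..n. If trial 1 is a success and trials 2..n are
    not all successes (i.e. the event F-bar H occurred), return the index
    of the last failure; otherwise return None."""
    outcomes = [rng.random() < p for _ in range(n)]
    if not outcomes[0] or all(outcomes[1:]):
        return None                      # F-bar H did not occur
    return max(i + 1 for i, s in enumerate(outcomes) if not s)

def last_failure_distribution(p, n, trials=200_000, seed=1):
    """Estimate the conditional distribution of the last-failure
    position given F-bar H."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        i = last_failure(p, n, rng)
        if i is not None:
            counts[i] += 1
    total = sum(counts.values())
    return {i: counts[i] / total for i in sorted(counts)}

if __name__ == "__main__":
    dist = last_failure_distribution(p=0.5, n=3)
    print(dist)                 # frequency of each F-bar_i given F-bar H
    print(sum(dist.values()))   # always 1, by properties 1 and 2
```

Note that these estimates are the conditional probabilities $P(\bar{F_i}\mid\bar{F}H)$; by properties 1 and 2 they sum to 1 regardless of their individual values.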
Therefore, informally,
$$P(E|\bar{F}H)=\big(\bar{F_2}\text{ occurred and restart experiment}\big)\text{ or }\big(\bar{F_3}\text{ occurred and restart experiment}\big)\text{ or }\cdots\text{ or }\big(\bar{F_n}\text{ occurred and restart experiment}\big),$$
so
$$P(E|\bar{F}H)=\sum_{i=2}^{n}P(E|\bar{H})\cdot P(\bar{F_i})=P(E|\bar{H})\sum_{i=2}^{n}P(\bar{F_i}).$$
Also, $P(\bar{F_i})=\frac{1}{n-1}$, which implies $\sum_{i=2}^{n}P(\bar{F_i})=\frac{1}{n-1}\sum_{i=2}^{n}1=1$.
Therefore $P(E|\bar{F}H)=P(E|\bar{H})$.
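The conclusion itself can also be tested by simulation. The sketch below is my own (illustrative parameters $p=0.5$, $n=3$, $m=2$): it generates each game's full trial sequence, classifies the first $n$ trials according to the events $H$, $\bar{H}$, and $\bar{F}$, and compares the two conditional estimates:

```python
import random

def sample_game(p, n, m, rng):
    """Generate i.i.d. trials until E is decided (run of n successes vs.
    run of m failures) AND at least n trials have been generated.
    Returns (E, outcomes of the first n trials)."""
    outcomes, succ, fail, E = [], 0, 0, None
    while E is None or len(outcomes) < n:
        s = rng.random() < p
        outcomes.append(s)
        if E is None:
            if s:
                succ, fail = succ + 1, 0
                if succ == n:
                    E = True
            else:
                fail, succ = fail + 1, 0
                if fail == m:
                    E = False
    return E, outcomes[:n]

def compare(p, n, m, trials=200_000, seed=1):
    """Estimate P(E | F-bar H) and P(E | H-bar) side by side."""
    rng = random.Random(seed)
    fbar_h, hbar = [], []
    for _ in range(trials):
        E, head = sample_game(p, n, m, rng)
        if head[0] and not all(head[1:]):   # first trial success, F-bar occurs
            fbar_h.append(E)
        if not head[0]:                     # first trial failure
            hbar.append(E)
    return sum(fbar_h) / len(fbar_h), sum(hbar) / len(hbar)

if __name__ == "__main__":
    a, b = compare(0.5, 3, 2)
    print(f"P(E | F-bar H) ~ {a:.3f}")
    print(f"P(E | H-bar)   ~ {b:.3f}")
```

For these parameters the two estimates agree to within Monte Carlo error (both come out near $0.2$), consistent with $P(E|\bar{F}H)=P(E|\bar{H})$.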
Is this proof correct? If not, can someone explain why $P(E|\bar{F}H)=P(E|\bar{H})$? Thanks.