What is the expected number of flips of a fair coin before seeing both $HH$ and $TT$?


I flip an unbiased coin (the probabilities $p$ of heads and $q$ of tails are both $\frac{1}{2}$). On average, how many flips until I obtain both $HH$ and $TT$, in either order?

My approach is as follows. Let $E$ be the expected number of flips. The possibilities are

A: HH Flips $= 2p^2$

B: HT Flips $= (1+E)pq$

C: TT Flips $= 2q^2$

D: TH Flips $= (1+E)pq$

Equating gives $E = A + B + C + D$. For $p = q = \frac{1}{2}$ this yields $E = 3$ flips total.

I want to know if this is correct.

Using a similar method I know the expected number of flips to get $HH$ is $6$, and the expected number to get $TT$ is also $6$, so my solution of $3$ seems wrong. I am not sure whether my solution is correct for either $HH$ or $TT$.

There is 1 best solution below.


This exercise is a typical application of a Markov chain.

We can model our system with $5$ states:

  • $0$: the initial state
  • $A$: at least $1$ toss, and neither $HH$ nor $TT$ has occurred yet
  • $B$: exactly $1$ of $HH$ and $TT$ has occurred, and the last toss matches the pair that has already occurred (e.g., $HH$ has occurred but $TT$ has not, and the last toss was an $H$)
  • $C$: exactly $1$ of $HH$ and $TT$ has occurred, and the last toss matches the pair that has not yet occurred (e.g., $HH$ has occurred but $TT$ has not, and the last toss was a $T$)
  • $1$: both $HH$ and $TT$ have occurred (so $1$ is the sole absorbing state).

We can compute the probabilities of transitions between states using the description of the problem. For example, if you toss the coin while in state $A$,

  • with probability $\frac{1}{2}$ your next toss does not coincide with your previous toss, and so you remain in state $A$, and
  • with probability $\frac{1}{2}$ your next toss coincides with your previous toss, and so you move to state $B$.

At this point the problem is routine: we can construct the stochastic transition matrix for the process from the probabilities above (using the state order $0, A, B, C, 1$), which we'll write in a block decomposition with block sizes $4, 1$: $$\left( \begin{array}{c|c} S & {\bf a} \\ \hline \cdot & 1 \end{array} \right)$$

The upper-left $4 \times 4$ submatrix $S$ records the transition probabilities between transient states; for example, our previous observation about the possible transitions from state $A$ shows that $S_{22} = \Bbb P(A \leftarrow A) = \frac{1}{2}$ and $S_{23} = \Bbb P(B \leftarrow A) = \frac{1}{2}$.

We start in state $0$, so the initial distribution vector is $$\tau := \pmatrix{1&\cdot&\cdot&\cdot},$$ and thus the expected time to reach the absorbing state is $$\bar t = \tau (I - S)^{-1} {\bf 1} ,$$ where $\bf 1$ is the $4 \times 1$ vector with all entries $1$.
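As a sketch of this computation (assuming NumPy is available; the entries of $S$ are read off from the transition probabilities above, in the state order $0, A, B, C$):

```python
import numpy as np

# Transient states, in order: 0, A, B, C.
S = np.array([
    [0.0, 1.0, 0.0, 0.0],  # from 0: one toss always lands us in A
    [0.0, 0.5, 0.5, 0.0],  # from A: stay in A, or move to B
    [0.0, 0.0, 0.5, 0.5],  # from B: stay in B, or move to C
    [0.0, 0.0, 0.5, 0.0],  # from C: back to B, or absorb into 1 (prob 1/2)
])

tau = np.array([1.0, 0.0, 0.0, 0.0])  # initial distribution: start in state 0
ones = np.ones(4)                     # the all-ones vector

# Expected time to absorption: tau (I - S)^{-1} 1
t_bar = tau @ np.linalg.inv(np.eye(4) - S) @ ones
print(t_bar)  # ≈ 9.0
```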

For our process carrying out this computation gives $\boxed{\bar t = 9}$.

Remark Once our process reaches $A$ it never goes back to $0$, and likewise once it reaches $B$ or $C$ it never goes back to $0$ or $A$. So, we can split our Markov process into three simpler processes---one for the transition from $0$ to $A$, another for the transition from $A$ to $B$, and a third for the movement from $B$ to $1$---in which case the linearity of expectation gives that $\bar t$ is just the sum of the expected durations $\bar t\!_{0A}, \bar t\!_{AB}, \bar t\!_{B1}$ of the constituent processes. In particular, the computation you mentioned can be adapted to show that moving from $B$ to $1$ takes an average of $\bar t\!_{B1} = 6$ steps, which reduces the original problem to computing $\bar t\!_{0A}$ and $\bar t\!_{AB}$. In both cases the constituent Markov chain has exactly $2$ states, starts in the sole transient state, and at each step moves to the sole absorbing state with some probability $r$: specializing the above formula gives an expected time of $\frac{1}{r}$ steps to reach the absorbing state.
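The adapted computation for $\bar t\!_{B1}$ amounts to solving the two first-step equations for states $B$ and $C$; a minimal sketch in exact arithmetic:

```python
from fractions import Fraction

# First-step equations for the B -> 1 leg (fractions avoid rounding error):
#   t_B = 1 + (1/2) t_B + (1/2) t_C
#   t_C = 1 + (1/2) t_B
half = Fraction(1, 2)

# Substituting t_C into the first equation:
#   t_B = 3/2 + (3/4) t_B,  so  t_B (1 - 3/4) = 3/2.
t_B = Fraction(3, 2) / (1 - 3 * half * half)
t_C = 1 + half * t_B
print(t_B, t_C)  # 6 4
```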

So, $\bar t\!_{0A} = \frac{1}{1} = 1$ step and $\bar t\!_{AB} = \frac{1}{1/2} = 2$ steps, giving $\bar t = 1 + 2 + 6 = \boxed{9}$.
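A Monte Carlo simulation agrees with this value (a sketch only; the trial count and seed are arbitrary choices):

```python
import random

def flips_until_both(rng):
    """Flip a fair coin until both HH and TT have appeared; return the count."""
    seen_hh = seen_tt = False
    prev = None
    n = 0
    while not (seen_hh and seen_tt):
        toss = rng.choice("HT")
        n += 1
        if prev == toss == "H":
            seen_hh = True
        if prev == toss == "T":
            seen_tt = True
        prev = toss
    return n

rng = random.Random(0)
trials = 200_000
avg = sum(flips_until_both(rng) for _ in range(trials)) / trials
print(round(avg, 2))  # close to 9
```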