A player tosses a coin until he gets three consecutive heads. We are interested in the expected number of tosses, assuming that the coin is fair and that the tosses are independent.
To perform the calculations, we consider the Markov chain whose state is the number of consecutive heads obtained so far, up to a maximum of three. Writing $F$ for heads, $P$ for tails, and $X$ for either outcome, the four possible states according to the results of the last three tosses are:
$$ \begin{array}{|c|c|c|} \hline \text{State} & \text{Last 3 tosses} & \text{Initial probability} \\ \hline 3 & FFF & \tfrac{1}{2} \times \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{8}\\ \hline 2 & PFF & \tfrac{1}{2} \times \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{8}\\ \hline 1 & XPF & 1 \times \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}\\ \hline 0 & XXP & 1 \times 1 \times \tfrac{1}{2} = \tfrac{1}{2}\\ \hline \end{array} $$
The transition matrix for states $0$, $1$, $2$, and $3$ is
$$ P= \left( \begin{matrix} \tfrac{1}{2} & \tfrac{1}{2} & 0 & 0\\ \tfrac{1}{2} & 0 & \tfrac{1}{2} & 0 \\ \tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} \\ 0 & 0 & 0 & 1 \end{matrix} \right) $$
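As a numerical sanity check (not part of the quoted text), the expected number of tosses to absorption can be read off the fundamental matrix $(I - Q)^{-1}$, where $Q$ is the restriction of $P$ to the transient states; a minimal sketch in Python:

```python
import numpy as np

# Transition matrix for states 0, 1, 2, 3 (state 3 = FFF is absorbing).
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])

# Q is the restriction of P to the transient states 0, 1, 2.
Q = P[:3, :3]

# Expected steps to absorption: t = (I - Q)^{-1} * 1
# (solving the linear system rather than inverting explicitly).
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(t)  # [14. 12.  8.]
```

So starting from scratch (state $0$) the game lasts $14$ tosses on average.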
The state $3$, which corresponds to $FFF$, marks the end of the game. We define
$$\phi_i = E(\text{number of additional tosses before reaching state } 3 \mid \text{state } i)$$ for $i = 0, 1, 2, 3$. Obviously, $\phi_3 = 0$. By conditioning on the outcome of the next toss from state $i$, the following system of equations is obtained:
$$\phi_2 = \frac{1}{2} \phi_3 + \frac{1}{2} \phi_0 + 1$$ $$\phi_1 = \frac{1}{2} \phi_2 + \frac{1}{2} \phi_0 + 1$$ $$\phi_0 = \frac{1}{2} \phi_1 + \frac{1}{2} \phi_0 + 1$$
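(For reference, one quick way to verify where this system leads is to solve it numerically; the sketch below, which is my addition and not from the text, collects the unknowns as $(\phi_0, \phi_1, \phi_2)$ with $\phi_3 = 0$.)

```python
import numpy as np

# The three equations rearranged with unknowns (phi0, phi1, phi2) on the
# left-hand side, using phi3 = 0:
#   -1/2 phi0          +     phi2 = 1   (from phi2 = 1/2 phi3 + 1/2 phi0 + 1)
#   -1/2 phi0 +   phi1 - 1/2 phi2 = 1   (from phi1 = 1/2 phi2 + 1/2 phi0 + 1)
#    1/2 phi0 - 1/2 phi1          = 1   (from phi0 = 1/2 phi1 + 1/2 phi0 + 1)
A = np.array([
    [-0.5,  0.0,  1.0],
    [-0.5,  1.0, -0.5],
    [ 0.5, -0.5,  0.0],
])
phi = np.linalg.solve(A, np.ones(3))
print(phi)  # [14. 12.  8.]
```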
I don't understand how they obtain this system of linear equations. Could someone tell me how to derive it? Is there a property of conditional expectation that I don't know?
This is the very first application of first-step analysis, which combines the law of total expectation with the Markov property. Let $X_{t}$ and $X_{t+1}$ be the current state and next state respectively, and let $N_{t}$ be the number of additional throws needed, from time $t$, to reach the absorbing state $3$. Then for $i \neq 3$, $$ \begin{align*} \phi_i & = E[N_{t}\mid X_t = i] \\ & = \sum_{k=0}^3 E[N_{t}\mid X_t = i, X_{t+1} = k]\Pr\{X_{t+1} = k\mid X_t = i\} \\ & = \sum_{k=0}^3 E[N_{t+1}+1\mid X_{t+1} = k]\,p_{ik} \\ & = \sum_{k=0}^3 (\phi_k + 1)p_{ik} \\ & = \sum_{k=0}^3 \phi_k p_{ik} + 1 \end{align*}$$ Step $2$ follows from the law of total expectation. Step $3$ follows from the fact that $$ N_t = N_{t+1} + 1$$ unless $X_t = 3$, in which case $N_t = N_{t+1} = \ldots = 0$; that is, it takes one more throw at time $t$ than at time $t+1$ if the chain is not yet absorbed. Step $3$ also uses the Markov property and the definition of the transition matrix, where $p_{ik} = \Pr\{X_{t+1} = k \mid X_t = i\}$. Step $4$ just uses the definition of $\phi_k$ (note that this is time independent), and in the last step we use the fact that each row of the transition matrix sums to one, as its entries form a probability distribution.
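To back this up empirically, here is a small Monte Carlo sketch (the helper name `tosses_until_three_heads` is my own); the sample mean should land close to $\phi_0 = 14$:

```python
import random

def tosses_until_three_heads(rng):
    """One game: toss a fair coin until three consecutive heads appear."""
    run = 0   # current streak of consecutive heads
    n = 0     # total tosses so far
    while run < 3:
        n += 1
        run = run + 1 if rng.random() < 0.5 else 0
    return n

rng = random.Random(0)  # fixed seed for reproducibility
trials = 200_000
avg = sum(tosses_until_three_heads(rng) for _ in range(trials)) / trials
print(avg)  # close to phi_0 = 14
```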