Expected first return time of a Markov chain


Given a Markov chain with the following transition matrix:

$$M = \left( \begin{array}{cccccc} \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 \\ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 & 0 \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & 0 & 0 \\ \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ \end{array} \right)$$

I need to find $E(T_1 \mid X_0 = 1)$, where $T_1 = \inf\{ n \geq 1 : X_n = 1\}$, i.e. the expected first return time to state $1$.

I know that I can recursively calculate the probability of arriving back at 1 after exactly n steps:

$$f_1(n) = P(X_n = 1, X_{n-1} \neq 1, \dots, X_1 \neq 1 \mid X_0 = 1)$$

This can be done the following way:

$$f_1(n) = p_{1,1}(n)-\sum_{k=1}^{n-1}f_1(k)\,p_{1,1}(n-k)$$

where $p_{1,1}(n) = (M^n)_{1,1}$ is the probability of going from state $1$ back to state $1$ in $n$ steps (not necessarily for the first time).

So I would say that $$E(T_1 \mid X_0 = 1)= \sum_{n=1}^\infty n f_1(n)$$

is the expected return time to state $1$. However, I have no idea how to actually evaluate the series; can anybody help me, please?
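The series can at least be evaluated numerically. Here is a sketch in Python that computes $p_{1,1}(n)$ from powers of $M$, recovers $f_1(n)$ via the recursion above, and sums the truncated series (the truncation point `N` is my choice; $f_1(n)$ decays geometrically here, so the tail is negligible):

```python
import numpy as np

# Transition matrix from the question (states 1..6 -> indices 0..5).
M = np.array([
    [1/2, 1/2, 0,   0,   0,   0  ],
    [1/4, 3/4, 0,   0,   0,   0  ],
    [1/4, 1/4, 1/4, 1/4, 0,   0  ],
    [1/4, 0,   1/4, 1/4, 0,   1/4],
    [0,   0,   0,   0,   1/2, 1/2],
    [0,   0,   0,   0,   1/2, 1/2],
])

N = 200  # truncation point for the series

# p[n] = (M^n)_{1,1}: probability of being back at state 1 after n steps.
p = np.empty(N + 1)
Mn = np.eye(6)
for n in range(N + 1):
    p[n] = Mn[0, 0]
    Mn = Mn @ M

# First-return probabilities via the recursion
# f_1(n) = p_{1,1}(n) - sum_{k=1}^{n-1} f_1(k) p_{1,1}(n-k).
f = np.zeros(N + 1)
for n in range(1, N + 1):
    f[n] = p[n] - sum(f[k] * p[n - k] for k in range(1, n))

expected_return = sum(n * f[n] for n in range(1, N + 1))
print(expected_return)  # -> approximately 3.0
```

This agrees with the closed-form answers below: starting from state $1$, the chain either returns immediately ($f_1(1) = 1/2$) or moves to state $2$ and lingers there, giving $f_1(n) = \frac{1}{8}\left(\frac{3}{4}\right)^{n-2}$ for $n \geq 2$.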




I think Zoe's approach is the easiest but that she has the equations wrong.

Let

$$T_{11} = E(T_1\mid X_0=1) \\ T_{21} = E(T_1\mid X_0=2).$$

We want to find $T_{11}$. Considering the possible transitions between states $1$ and $2$ and their probabilities, we get equations:

$$T_{11} = 1+\dfrac{1}{2}T_{21}\qquad\qquad\qquad (1) \\ T_{21} = 1+\dfrac{3}{4}T_{21}\qquad\qquad\qquad (2)$$

Solving these simultaneously gives $T_{21} = 4$ and hence $T_{11} = 3$.

Note: Derivation of Equation $(1)$:

  • the $1$ counts the step taken out of the current state;
  • we have probability $1/2$ to move from State $1$ to State $2$, from where the remaining steps to return to State $1$ is $T_{21}$.

Equation $(2)$ is derived similarly.
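The two equations can be checked numerically by rewriting them in matrix form and solving the linear system (a small sketch with numpy):

```python
import numpy as np

# Rearranged first-step equations:
#   T11 - (1/2) T21 = 1
#         (1/4) T21 = 1
A = np.array([[1.0, -0.5],
              [0.0,  0.25]])
b = np.array([1.0, 1.0])

T11, T21 = np.linalg.solve(A, b)
print(T11, T21)  # -> 3.0 4.0
```

The same first-step pattern scales to larger chains: one equation per transient starting state, one unit of time per row.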


I agree with Mick A's response. We can confirm it by another method of finding the expected return time.

This Markov chain is reducible, but states $1$ and $2$ form a closed irreducible class, whose restricted transition matrix is $$ \begin{bmatrix} \frac{1}{2}&\frac{1}{2}\\ \frac{1}{4}&\frac{3}{4} \end{bmatrix} $$

We can then find the stationary distribution of this matrix, since the restricted chain is finite, irreducible, and aperiodic. The stationary distribution is: $$ \pi_1=\frac{1}{3} \text{ and } \pi_2 = \frac{2}{3} $$

If you don't know how to find the stationary distribution, it can be obtained either as any row of $\lim_{n\to\infty}P^n$ (Wolfram Alpha will compute this) when the chain is aperiodic, or by solving $\pi P=\pi$ together with $\sum_i \pi_i = 1$.

Next, there is a theorem which states that if a Markov chain is finite and irreducible, then it has a unique stationary distribution and $\mathbb{E}_x[T_x]=\frac{1}{\pi_x}$. In our case, $\mathbb{E}_1[T_1]=\frac{1}{1/3}=3$.
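Both steps of this second method can be verified numerically in a few lines (a sketch with numpy; the $2\times 2$ restriction of $M$ to states $1$ and $2$ is hard-coded):

```python
import numpy as np

# Closed irreducible class {1, 2}: restriction of M to those states.
P = np.array([[1/2, 1/2],
              [1/4, 3/4]])

# Stationary distribution as a row of lim P^n (valid since the chain
# is aperiodic); P^60 is already converged to machine precision here.
pi = np.linalg.matrix_power(P, 60)[0]
print(pi)         # -> approximately [1/3, 2/3]

# Sanity check of invariance: pi P = pi.
assert np.allclose(pi @ P, pi)

# Return-time identity E_1[T_1] = 1 / pi_1.
print(1 / pi[0])  # -> approximately 3.0
```

This matches both the recursive series computation and the first-step analysis above.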