Stationary distribution of a Markov chain


I am trying to understand the solution to Problem 1 in https://ocw.mit.edu/courses/mathematics/18-445-introduction-to-stochastic-processes-spring-2015/assignments/MIT18_445S15_homework3_sol.pdf

The problem comes down to solving for the stationary distribution of a discrete Markov chain with $2n + 2$ states $\{1, 2, \dots, 2n + 2\}$. The transition matrix is defined by: $$ P_{1,2} = 1\\ P_{2,1} = 1-p\\ P_{2n+1, 2n+2} = 1-p\\ P_{2n+2, 2n+1} = 1\\ P_{2i+1, 2i+2} = P_{2i+2, 2i+1} = 1 - p, \quad i = 1, \dots, n-1\\ P_{2i+2, 2i+3} = P_{2i+3, 2i+2} = p, \quad i = 0, \dots, n-1 $$
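To make sure I am reading the transition rules correctly, I wrote a small sketch (assuming numpy; the function name `transition_matrix` and the values $n = 3$, $p = 0.3$ are my own choices for testing) that builds $P$ and checks that every row sums to 1:

```python
import numpy as np

def transition_matrix(n, p):
    """Build the (2n+2) x (2n+2) transition matrix described above.

    States are 1..2n+2 in the problem; here row/column k-1
    corresponds to state k (0-based indexing).
    """
    P = np.zeros((2 * n + 2, 2 * n + 2))
    P[0, 1] = 1.0                       # P_{1,2} = 1
    P[1, 0] = 1.0 - p                   # P_{2,1} = 1 - p
    P[2 * n, 2 * n + 1] = 1.0 - p       # P_{2n+1,2n+2} = 1 - p
    P[2 * n + 1, 2 * n] = 1.0           # P_{2n+2,2n+1} = 1
    for i in range(1, n):               # i = 1, ..., n-1
        P[2 * i, 2 * i + 1] = 1.0 - p   # P_{2i+1,2i+2} = 1 - p
        P[2 * i + 1, 2 * i] = 1.0 - p   # P_{2i+2,2i+1} = 1 - p
    for i in range(n):                  # i = 0, ..., n-1
        P[2 * i + 1, 2 * i + 2] = p     # P_{2i+2,2i+3} = p
        P[2 * i + 2, 2 * i + 1] = p     # P_{2i+3,2i+2} = p
    return P

P = transition_matrix(n=3, p=0.3)
print(P.sum(axis=1))  # every row sums to 1, so P is stochastic
```

The rows do sum to 1, so the rules above describe a valid stochastic matrix.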

The author states: "It is easy to observe that $\pi(x_1) = \pi(x_{2n+2}) = a$ and $\pi(x_j) = b$ for $j \in [2, 2n + 1]$". I can see why $\pi(x_1) = \pi(x_{2n+2}) = a$ holds: the chain is invariant under the relabeling $x_i \mapsto x_{2n+3-i}$ (i.e. reversing the chain), so the stationary distribution must be symmetric under it. But I can't see why the second part is true.
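I did convince myself numerically that the claimed pattern $\pi = (a, b, \dots, b, a)$ holds. A sketch, assuming numpy ($n = 3$ and $p = 0.3$ are arbitrary test values): build $P$, take the left eigenvector for eigenvalue $1$, and inspect the entries:

```python
import numpy as np

n, p = 3, 0.3
size = 2 * n + 2

# Transition matrix, 0-based indexing (row k-1 is state k).
P = np.zeros((size, size))
P[0, 1] = 1.0
P[1, 0] = 1.0 - p
P[2 * n, 2 * n + 1] = 1.0 - p
P[2 * n + 1, 2 * n] = 1.0
for i in range(1, n):
    P[2 * i, 2 * i + 1] = 1.0 - p
    P[2 * i + 1, 2 * i] = 1.0 - p
for i in range(n):
    P[2 * i + 1, 2 * i + 2] = p
    P[2 * i + 2, 2 * i + 1] = p

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. a right eigenvector of P^T, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

print(np.isclose(pi[-1], pi[0]))    # pi(x_1) == pi(x_{2n+2})
print(np.allclose(pi[1:-1], pi[1])) # all interior states share one value b
print(np.isclose(pi[0], (1 - p) * pi[1]))  # a = (1-p) b, from balance at state 1
```

All three checks pass for the parameters I tried (the last relation $a = (1-p)b$ just restates the balance equation at state 1, whose only inflow is from state 2). So the claim is certainly true; what I am missing is the argument.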

Could anyone provide a proof or some intuition? Thanks.