Limiting distribution of a Markov chain with transient and recurrent classes


Consider the following transition probability matrix $P$ on the state space $S = \{0, 1, 2, 3, 4\}$.

P:

|1   0   0   0   0  |
|0.5 0   0.5 0   0  |
|0   0.5 0   0.5 0  |
|0   0   0.5 0   0.5|
|0   0   0   0   1  |

The classes are $\{0\}$, $\{1,2,3\}$, and $\{4\}$, where $0$ and $4$ are recurrent and $\{1,2,3\}$ is transient. Find $\lim_{n\to\infty} p^{(n)}_{ij}$ for all $i, j \in S$.

How do I find the limiting distribution for this type of matrix, which mixes transient and recurrent classes? It would be very helpful if someone could link to more problems of this type.


To explain @Omnomnomnom's point and this wiki page in more detail as they apply to your question:

First, note that "(i)f player one has $n_1$ pennies and player two $n_2$ pennies, the probabilit(y) $P_1$ ... that player one ... will end penniless (is)":

$$P_{1} = \frac{n_{2}}{n_{1}+n_{2}}$$
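As a cross-check, the same absorption probabilities fall out of the standard fundamental-matrix computation for absorbing chains. Here is a minimal NumPy sketch (the matrix entries come from the question; the reordering of states so that the transient ones come first is my own bookkeeping, not part of the original answer):

```python
import numpy as np

# Transitions among the transient states 1, 2, 3 (block Q of the
# canonical form of the transition matrix from the question).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

# Transitions from transient states 1, 2, 3 into the absorbing
# states, with columns ordered (state 0, state 4).
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix N = (I - Q)^{-1}
B = N @ R                          # B[i, j] = P(absorbed in state j | start in transient state i)
print(B)
```

Each row of $B$ gives, for a transient starting state, the probabilities of absorption in state $0$ and state $4$ respectively; the first row reproduces the $\frac{3}{4}$ and $\frac{1}{4}$ predicted by the formula above for a player starting with one coin.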

In your case the setup is exactly this: the first player starts the game with $n_1$ coins and the second player with $n_2 = 4 - n_1$ coins, and the chain tracks player one's fortune until he either goes penniless (state $0$) or wins the game (state $4$).

Therefore, if you start off with $0$ coins, since state $0$ is an absorbing state, the probability that you stay at state $0$ is $1$.

If you start off with $1$ coin, your opponent starts off with $3$ coins, i.e. $n_1 = 1, n_2 = 3$. Plugging this into the equation yields that player 1 ends penniless with probability $\frac{3}{4}$ and wins the game with probability $\frac{1}{4}$. You can apply the same logic to the cases where player 1 starts off with $2$, $3$, or $4$ coins.
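You can also verify the full limit numerically by taking a large power of $P$. A minimal sketch with NumPy (the tooling choice is mine, not part of the original answer; the matrix is copied from the question):

```python
import numpy as np

# Transition matrix from the question, states 0..4 (gambler's ruin, p = q = 1/2).
P = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

# Approximate lim_{n -> infinity} P^n with a large power; the transient
# part decays geometrically, so n = 1000 is far more than enough.
P_inf = np.linalg.matrix_power(P, 1000)
print(np.round(P_inf, 4))
```

Row $i$ converges to $\left(\frac{4-i}{4},\, 0,\, 0,\, 0,\, \frac{i}{4}\right)$; for example, the row for state $1$ is $(0.75,\, 0,\, 0,\, 0,\, 0.25)$, matching the absorption probabilities computed above.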

As for the proof of the equation, you can refer to Ross's textbook, Ch. 4.5.1, and look at the case where $\frac{p}{q} = 1$.
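For the symmetric case here, the derivation can also be sketched directly (a standard first-step argument, not specific to any one textbook). Let $h_i$ be the probability of reaching state $4$ before state $0$ when starting from state $i$. Conditioning on the first step gives

$$h_i = \tfrac{1}{2}\, h_{i-1} + \tfrac{1}{2}\, h_{i+1}, \qquad h_0 = 0, \quad h_4 = 1,$$

so the $h_i$ are linear in $i$, forcing $h_i = \frac{i}{4}$. The ruin probability is then $1 - \frac{i}{4} = \frac{4-i}{4} = \frac{n_2}{n_1+n_2}$ with $n_1 = i$ and $n_2 = 4 - i$, matching the formula above.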