Please can someone help me to understand stationary distributions of Markov Chains?


I'm currently trying to understand (intuitively) what a stationary distribution of a Markov chain is. In our lecture notes, we're given the following definition:

[image of the definition, not reproduced here]

This was of little benefit to my understanding, so I searched online for a more useful explanation. I found a video which improved my understanding to the extent that I now understand that stationary distributions are to do with what happens to the probabilities at each state within a Markov chain when time becomes infinitely large. This is still not a sufficient understanding of the concept, though.

For example, I've been asked to show that $$ \pi_{a} = \left( \frac{2}{5}, \frac{3}{5}, 0, 0, 0 \right) \\ \pi_{b} = \left( 0, 0, 1, 0, 0 \right) \\ \pi_{c} = \left( 0, 0, 0, \frac{3}{5}, \frac{2}{5} \right) $$ are stationary distributions with respect to the Markov chain with one-step transition matrix $$ \mathbf{P} = \left( \begin{array}{ccccc} \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ \frac{1}{3} & \frac{2}{3} & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & \frac{2}{3} & \frac{1}{3} \\ 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{array} \right) $$

How would you do this? What is a stationary distribution, with respect to this example?

Also, could someone please confirm that I'm correct in thinking that the notation $p_{ij}$ denotes the probability of the process moving from the state $i$ to the state $j$?


BEST ANSWER

"Also, could someone please confirm that I'm correct in thinking that the notation $p_{ij}$ denotes the probability of the process moving from the state $i$ to the state $j$?" $(*)$

Correct; $p_{ij}$ is the entry in row $i$, column $j$ of $\mathbf{P}$.


"How would you do this? What is a stationary distribution, with respect to this example?"

If the chain starts in state $3$ it stays there forever, because according to $(*)$ there is zero probability of moving to any other state.

Therefore

$\pi_{b} = ( 0, 0, 1, 0, 0)$

is an obvious stationary distribution.
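This is easy to check numerically as well; here is a minimal sketch with numpy, with the matrix $\mathbf{P}$ copied from the question:

```python
import numpy as np

# One-step transition matrix P from the question
P = np.array([
    [1/2, 1/2, 0,   0,   0  ],
    [1/3, 2/3, 0,   0,   0  ],
    [0,   0,   1,   0,   0  ],
    [0,   0,   0,   2/3, 1/3],
    [0,   0,   0,   1/2, 1/2],
])

pi_b = np.array([0, 0, 1, 0, 0])

# A distribution pi is stationary iff pi @ P == pi (row vector times P)
print(np.allclose(pi_b @ P, pi_b))  # True
```

Since row $3$ of $\mathbf{P}$ is the standard basis vector $e_3$, multiplying $\pi_b$ by $\mathbf{P}$ just picks out that row again.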

If the chain starts in state $1$ or $2$ it stays within the set $\{1, 2\}$ forever, because according to $(*)$ there is zero probability of moving to any state outside that set.

Likewise, if the chain starts in state $4$ or $5$ it stays within the set $\{4, 5\}$ forever, because according to $(*)$ there is zero probability of moving to any state outside that set.

Now you can treat these two closed classes as separate $2 \times 2$ sub-chains and use the result that a probability vector which fulfills

$\hat{\pi} \mathbf{P} = \hat{\pi}$ $\:\:(**)$

is a stationary distribution.

So you solve these two systems of equations to get the remaining stationary distributions. Here you also need to use that $\hat{\pi}$ is a probability vector; that is, its components sum to one.
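This can also be done mechanically; a sketch using numpy, where the linear system $\hat{\pi}(\mathbf{Q} - I) = 0$ is stacked with the normalisation constraint $\sum_i \hat{\pi}_i = 1$ and solved by least squares (the helper function `stationary` is mine, not part of the original answer):

```python
import numpy as np

def stationary(Q):
    """Solve pi @ Q = pi together with sum(pi) = 1 via least squares."""
    n = Q.shape[0]
    # Transpose so each constraint becomes a row: (Q - I)^T pi = 0
    A = np.vstack([(Q - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0  # normalisation: components sum to one
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# The two closed classes of the question's P, as 2x2 sub-chains
Q12 = np.array([[1/2, 1/2], [1/3, 2/3]])  # states {1, 2}
Q45 = np.array([[2/3, 1/3], [1/2, 1/2]])  # states {4, 5}

print(stationary(Q12))  # [0.4 0.6]  i.e. (2/5, 3/5)
print(stationary(Q45))  # [0.6 0.4]  i.e. (3/5, 2/5)
```

Padding these with zeros on the other states recovers $\pi_a$ and $\pi_c$ from the question.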


"I now understand that stationary distributions are to do with looking at what happens to the probabilities at each state within a Markov Chain when time becomes infinitely large"

There is also a theorem that is good to know:

If the Markov chain is finite, irreducible and aperiodic, then

$\lim \limits_{n \to \infty} \mathbf{P}^n = \hat{\mathbf{P}}$

where $\hat{\mathbf{P}}$ is a matrix whose rows are identical and equal to the stationary distribution $\hat{\pi}$ for the Markov chain, defined by equation $(**)$.
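Note that the chain in the question is reducible, so this theorem does not apply to $\mathbf{P}$ as a whole, but it does apply to each closed sub-chain. A quick numpy illustration on the $\{1, 2\}$ block (a sketch of mine, not part of the original answer):

```python
import numpy as np

# The irreducible, aperiodic sub-chain on states {1, 2}
Q = np.array([[1/2, 1/2],
              [1/3, 2/3]])

# Raising Q to a high power: both rows converge to the
# stationary distribution (2/5, 3/5)
Qn = np.linalg.matrix_power(Q, 50)
print(Qn)  # rows are approximately [0.4 0.6]
```

So whichever of states $1$ and $2$ the chain starts in, after many steps it is in state $1$ with probability about $2/5$ and in state $2$ with probability about $3/5$.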