Markov chain. Is steady state a scaled eigenvector of transition probability matrix


So suppose we have transition matrix P for a Markov chain and suppose it satisfies the relevant criteria so that

$$ \lim_{n\rightarrow \infty} P^{(n)} = \pi $$

exists, where the limiting matrix $\pi$ has the steady-state distribution in each of its columns.

Then if we take one of the columns of $\pi$ , say $ \vec{v}$ we will have that $P\vec{v} = \vec{v}$. So the steady state is just an eigenvector of the transition matrix.
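For instance (a hypothetical 3-state chain using NumPy, with the column-stochastic convention used above, i.e. each column of $P$ sums to $1$):

```python
import numpy as np

# Hypothetical column-stochastic transition matrix (each column sums to 1),
# matching the P v = v convention above.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.3],
    [0.1, 0.2, 0.6],
])

# The steady state is the eigenvector for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P)
k = np.argmin(np.abs(eigvals - 1.0))   # index of the eigenvalue closest to 1
v = np.real(eigvecs[:, k])
v = v / v.sum()                        # rescale so the entries sum to 1

assert np.allclose(P @ v, v)           # P v = v: v is the steady state
```

Note the rescaling step: `eig` returns unit-norm eigenvectors, so the eigenvector must be divided by its entry sum to become a probability distribution (hence "scaled eigenvector" in the title).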

But why then do we learn to find $P^{(n)}$ by diagonalising and then expanding, when it would be easier just to find the eigenvector at the start? After all, we find the eigenvectors in order to diagonalise anyway.
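To spell out what diagonalisation buys: if $P = VDV^{-1}$ then $P^n = VD^nV^{-1}$, giving every power at once. A sketch with a hypothetical 3-state chain:

```python
import numpy as np

# Hypothetical column-stochastic transition matrix (columns sum to 1).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.3],
    [0.1, 0.2, 0.6],
])

# Diagonalise: P = V D V^{-1}, so P^n = V D^n V^{-1} for any n.
eigvals, V = np.linalg.eig(P)
n = 10
Pn_diag = V @ np.diag(eigvals**n) @ np.linalg.inv(V)

# Agrees with repeated matrix multiplication.
assert np.allclose(Pn_diag, np.linalg.matrix_power(P, n))
```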

Or am I misunderstanding something?

Doesn't the above also imply that for any initial distribution $\vec{w}$ (entries non-negative and summing to $1$) we get $ \lim_{n\rightarrow \infty} P^{(n)} \vec{w} = \vec{v}$, and are there criteria on the matrix for which this holds?
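This is easy to check numerically (same hypothetical 3-state setup; note $\vec{w}$ must be a probability vector for the limit to be the steady state):

```python
import numpy as np

# Hypothetical column-stochastic transition matrix (columns sum to 1).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.3],
    [0.1, 0.2, 0.6],
])

w = np.array([1.0, 0.0, 0.0])          # an arbitrary initial distribution
Pn = np.linalg.matrix_power(P, 50)

# Every column of P^n converges to the same steady-state vector,
# so P^n w lands on that vector for ANY probability vector w.
assert np.allclose(Pn[:, 0], Pn[:, 1]) and np.allclose(Pn[:, 1], Pn[:, 2])
assert np.allclose(Pn @ w, Pn[:, 0])
```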

---

**1 Answer**
It depends on what you're asking. If your question is just "what's the steady state", all you need is the largest eigenvalue ($1$ for any Markov matrix) and its eigenvector(s).

If you want more detail about the behavior than just what that steady state is, you'll need more than the one eigenvector for the largest eigenvalue. If, say, you want to know how fast deviations from that steady state decay, you'll need the next eigenvalue down. For the full picture and exact explicit values of $P^n$, you need all of them.
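Concretely (hypothetical 3-state chain): the second-largest eigenvalue magnitude sets the geometric rate at which deviations from the steady state die out.

```python
import numpy as np

# Hypothetical column-stochastic transition matrix (columns sum to 1).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.3],
    [0.1, 0.2, 0.6],
])

# Eigenvalue magnitudes, largest first.
mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
# mags[0] == 1 (the steady state); mags[1] governs the mixing rate:
# after n steps, deviations from the steady state shrink like mags[1]**n.
assert abs(mags[0] - 1.0) < 1e-9
assert mags[1] < 1.0
```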

The full diagonalization is also the theoretical underpinning of the simpler eigenvector calculation mentioned earlier.