When is diagonalization necessary if finding the steady state vector is easier?


So I have been learning about Markov chains lately, and I stumbled across something that got me thinking. I was working on a relatively simple Markov chain whose transition matrix is

$A=\begin{bmatrix}0.5&0.3&0.2\\0.1&0.1&0.2\\0.4&0.6&0.6\end{bmatrix}$ and the initial state is $V=\begin{bmatrix}17'000\\12'000\\3'000\end{bmatrix}$. So, to find out where the system ends up once it has stabilized, I would calculate $\lim_{n\to \infty}A^nV$. However, I could also just find the steady-state vector $V^*$ from the get-go by row reducing $[A-I\mid 0]$ and save a huge amount of work (though I would need to scale the result so its entries sum to the total population). Will this always work? Is there any instance where I actually have to diagonalize the transition matrix (assuming it is diagonalizable) in order to find the stable state of the system? That is what I was initially taught to do, but it seems like a huge waste of time if I am guaranteed to get the solution with one simple row reduction plus scaling. :)
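For what it's worth, here is a quick numerical sanity check (a sketch in Python with NumPy; the variable names are my own) that the two approaches agree on this particular matrix: iterating $A^nV$ and solving $(A-I)v=0$ with a scaling constraint produce the same stabilized vector.

```python
import numpy as np

# Transition matrix and initial state from the question (columns sum to 1).
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.1, 0.2],
              [0.4, 0.6, 0.6]])
V = np.array([17000.0, 12000.0, 3000.0])

# Approach 1: iterate A^n V until it stabilizes.
x = V.copy()
for _ in range(200):
    x = A @ x

# Approach 2: solve (A - I) v = 0 directly. The rows of A - I are linearly
# dependent (each column of A sums to 1), so replace one row with the
# scaling constraint that the entries sum to the total population.
M = A - np.eye(3)
M[2] = 1.0                        # row [1, 1, 1]: enforce v1 + v2 + v3 = 32000
b = np.array([0.0, 0.0, V.sum()])
v = np.linalg.solve(M, b)

print(np.allclose(x, v))          # prints True: both approaches agree
```

Replacing a dependent row with the normalization condition plays the role of the "scale it" step in the question: the null space of $A-I$ is one-dimensional here, and the constraint picks out the member with the right total.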


2 Answers

Best answer:

If the limit exists, then it is a solution of $Ax=x$. But this equation need not have a unique (suitably scaled) solution. In that case you need to expand the initial condition in the full eigenvector basis (or generalized eigenvector basis, if necessary) in order to extract the limit. It turns out that "generically" this does not happen: if the Markov chain is irreducible and the limit exists, then the stationary vector is unique up to scaling. You can ensure that the limit exists by additionally assuming aperiodicity.
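To illustrate the non-unique case, here is a small reducible example (my own toy construction, not from the question): two disconnected 2-state chains glued into one 4-state matrix. The equation $Ax=x$ then has a two-dimensional solution space, and the limit of $A^nV$ genuinely depends on how the initial mass is split between the two blocks.

```python
import numpy as np

# Reducible (block-diagonal) column-stochastic matrix: states {1,2} and
# {3,4} never communicate, so the chain has two closed classes.
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5]])

# A - I has a two-dimensional null space: its rank is 2, not 3.
print(np.linalg.matrix_rank(A - np.eye(4)))   # prints 2

# Two initial states with the same total mass but different block splits
# converge to different limits, so row reducing [A - I | 0] alone cannot
# tell you which stationary vector the chain settles into.
V1 = np.array([1.0, 0.0, 0.0, 0.0])
V2 = np.array([0.0, 0.0, 1.0, 0.0])
lim1 = np.linalg.matrix_power(A, 50) @ V1     # -> [0.5, 0.5, 0, 0]
lim2 = np.linalg.matrix_power(A, 50) @ V2     # -> [0, 0, 0.5, 0.5]
print(np.allclose(lim1, lim2))                # prints False
```

This is exactly the situation where you must expand the initial condition in the eigenbasis (here, track how much mass starts in each closed class) rather than rely on a single row reduction.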

Second answer:

For a Markov chain transition matrix, every eigenvalue satisfies $|\lambda_i|\leq 1$ (a theorem you can prove, e.g. from the fact that a stochastic matrix does not increase the 1-norm of a nonnegative vector). The contributions of the eigenvalues with $|\lambda_i|<1$ decay as $n\to\infty$, so the stable state depends only on the eigenspace of $\lambda=1$.
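As a quick check of both claims on the matrix from the question (a sketch; the normalization of the eigenvector is my own choice): every eigenvalue has modulus at most 1, and scaling the $\lambda=1$ eigenvector to sum to 1 recovers the steady-state proportions.

```python
import numpy as np

A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.1, 0.2],
              [0.4, 0.6, 0.6]])

w, vecs = np.linalg.eig(A)

# Every eigenvalue lies in the closed unit disk: |lambda_i| <= 1.
print(np.all(np.abs(w) <= 1 + 1e-12))   # prints True

# Pick the eigenvector for lambda = 1 and scale it to sum to 1. The
# components along the other eigenvalues (|lambda| < 1) die off as n grows,
# which is why only lambda = 1 matters for the stable state.
k = np.argmin(np.abs(w - 1.0))
v = np.real(vecs[:, k])
v = v / v.sum()
print(v)   # steady-state proportions [4/13, 2/13, 7/13]
```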