Stationary distribution in a Markov process.


Consider a homogeneous Markov process with transition matrix $W$, whose entry $W_{ab}$ is the probability of passing from state $b$ to state $a$. Thus at time $t+1$ the probability of being in state $a$, $p_a(t+1)$, is given by $p_a(t+1) = \sum_b W_{ab}p_b(t)$.

Now, in the case of an ergodic matrix, I know that $\lambda = 1$ is a simple eigenvalue, while all the other eigenvalues have modulus less than $1$. Now my question:

Consider the eigenvector corresponding to the eigenvalue $1$, say $(p_a)$, and let $(v^0_a)$ be a vector with nonnegative components satisfying $\sum_a v^0_a = 1$ (of course $(p_a)$ also has this property: $\sum_a p_a = 1$). Then $$ \lim_{n\rightarrow \infty} \sum_b W^{(n)}_{ab}v_b^0 = p_a \qquad \text{for every } a, $$ where $W^{(n)}_{ab}$ denotes the entries of $W^n$. In other words, given any initial probability distribution $(v^0_a)$, the probability distribution tends to $(p_a)$ as $n\rightarrow \infty$.
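To make the claim concrete, here is a small numerical sketch (my own example, with a hypothetical $2\times 2$ column-stochastic matrix $W$): repeatedly applying $W$ to an arbitrary initial distribution drives it toward the eigenvector of eigenvalue $1$, normalized to sum to $1$.

```python
import numpy as np

# Hypothetical ergodic transition matrix; columns sum to 1,
# matching the convention p_a(t+1) = sum_b W_{ab} p_b(t).
W = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Eigenvector for eigenvalue 1, normalized so its components sum to 1.
eigvals, eigvecs = np.linalg.eig(W)
k = np.argmin(np.abs(eigvals - 1.0))
p = np.real(eigvecs[:, k])
p = p / p.sum()          # stationary distribution (p_a)

# Start from an arbitrary probability distribution and iterate.
v = np.array([1.0, 0.0])
for _ in range(100):
    v = W @ v            # v(n+1)_a = sum_b W_{ab} v(n)_b

print(p)  # stationary distribution, here [2/3, 1/3]
print(v)  # numerically indistinguishable from p
```

The second eigenvalue of this $W$ is $0.7$, so the distance between $v$ and $p$ shrinks like $0.7^n$, which is exactly the kind of behavior the question asks to prove in general.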

Is it possible to prove this statement using the tools of linear algebra (such as diagonalization)? If so, how? And is the converse true (if there exists a unique stationary distribution, then it is the eigenvector of eigenvalue $1$)?