Linear algebra applied to a discrete Markov process


We give an abstract description of a discrete Markov process as follows. Consider a system $A$ with $n$ possible states $S_{1},\ S_{2},\cdots, S_{n}$. At each step of the process, the system randomly moves from one state to another. We write $A(k)= S_{l}$ for the event ``$A$ is in state $S_{l}$ at step $k$.''

Suppose the evolution of $A$ satisfies the following rule: if $A(k)= S_{i}$, then $$ \text{Probability}(A(k+1)= S_{j})=t_{i,j}, $$ where the number $t_{i,j}$ does not depend on $k$ (the process is time-homogeneous). We define the transition matrix $T\in M(n,n)$ by setting its $(i,j)$th entry to be $t_{i,j}$; in other words, $T_{i,j}=t_{i,j}$.

Now suppose we know that $A(0)=S_{1}$, and consider the \textbf{row vector} $$ \vec{v}_{k}=[\text{Probability}(A(k)= S_{1}),\ \text{Probability}(A(k)= S_{2}),\cdots,\ \text{Probability}(A(k)= S_{n})]. $$ Show that for any positive integer $k$, we have $$\vec{v}_{k}=[1,0,\cdots, 0]\cdot T^{k}.$$
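To make the claim concrete, here is a small numerical sketch in Python. It computes $\vec{v}_{k}=[1,0,\cdots,0]\cdot T^{k}$ by repeated row-vector multiplication (since $\vec{v}_{k+1}=\vec{v}_{k}T$) and compares the result with a Monte Carlo simulation of the chain. The $3\times 3$ matrix `T` below is a made-up example, not part of the problem.

```python
import random

# Hypothetical 3-state transition matrix (each row sums to 1).
# T[i][j] = probability of moving from state S_{i+1} to S_{j+1} in one step.
T = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def step(v, T):
    """One step of the chain: the row vector v_{k+1} = v_k * T."""
    n = len(T)
    return [sum(v[i] * T[i][j] for i in range(n)) for j in range(n)]

def distribution(k, T):
    """Distribution after k steps, starting from v_0 = [1, 0, ..., 0] (A(0) = S_1)."""
    v = [1.0] + [0.0] * (len(T) - 1)
    for _ in range(k):
        v = step(v, T)
    return v

def simulate(k, T, trials=100_000, seed=0):
    """Monte Carlo estimate of the same distribution by running the chain."""
    rng = random.Random(seed)
    counts = [0] * len(T)
    for _ in range(trials):
        state = 0  # A(0) = S_1
        for _ in range(k):
            state = rng.choices(range(len(T)), weights=T[state])[0]
        counts[state] += 1
    return [c / trials for c in counts]

exact = distribution(3, T)   # v_3 = [1,0,0] * T^3
approx = simulate(3, T)      # empirical frequencies, close to exact
print(exact)
print(approx)
```

The exact vector and the empirical frequencies agree up to sampling noise, which is a numerical check (not a proof) of the identity $\vec{v}_{k}=\vec{v}_{0}T^{k}$.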