Consider a Markov chain with $N$ states. At time $k$, the probability of being in state $i$ is $\pi_i(k)$. Let $\pi(k)=[\pi_1(k),\pi_2(k),\dots,\pi_N(k)]$ be the corresponding row vector of probabilities, and let $X(k)$ be the state at time $k$. We can define the transition probabilities between states as:
$$p(i,j) \doteq \Pr\{X(k+1)=j \mid X(k)=i\},$$
and collect these into the transition matrix
$$P = \begin{bmatrix} p(1,1) & \cdots & p(1,N) \\ \vdots & \ddots & \vdots \\ p(N,1) & \cdots & p(N,N) \end{bmatrix}.$$
The matrix $P$ is a (row-)stochastic matrix: $p(i,j)\ge 0$ and $\sum_{j=1}^{N} p(i,j)=1$ for all $i=1,2,\dots,N$.
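For concreteness, here is a quick numerical sanity check of the stochastic-matrix property. The 3-state matrix `P` below is a made-up example, not part of the problem statement:

```python
import numpy as np

# Hypothetical 3-state transition matrix, used only as a running example.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.4, 0.4],
])

# Stochastic-matrix check: nonnegative entries, each row summing to 1.
assert (P >= 0).all()
assert np.allclose(P.sum(axis=1), 1.0)
```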
I want to show two things.
1) $\pi(k+1)=\pi(k)P$
2) $\lim_{k\rightarrow\infty}\pi(k)=\pi^*$, where $\pi^*$ is the unique stationary distribution, for any initial distribution $\pi(0)$ satisfying $\pi_i(0)\ge 0$ and $\sum_{i=1}^{N}\pi_i(0)=1$. (I am assuming $P$ is regular, i.e. some power $P^m$ has all entries strictly positive; otherwise the limit need not exist or be unique.)
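Both claims can be checked numerically before attempting a proof. The sketch below uses the same hypothetical 3-state matrix as above (an assumption, not from the problem): claim 1 is just one application of the update $\pi(k+1)=\pi(k)P$, and claim 2 is observed by iterating the update until the distribution stops changing:

```python
import numpy as np

# Same hypothetical 3-state chain as before (all entries positive, so regular).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.4, 0.4],
])

pi = np.array([1.0, 0.0, 0.0])  # arbitrary initial distribution pi(0)

# Claim 1: one-step propagation, pi(k+1) = pi(k) P (row-vector convention).
pi_next = pi @ P
assert np.isclose(pi_next.sum(), 1.0)  # still a distribution

# Claim 2: iterating the update converges to a fixed point pi* = pi* P.
for _ in range(1000):
    pi = pi @ P

assert np.allclose(pi, pi @ P)  # stationarity of the limit
```

Starting from a different $\pi(0)$ yields the same limiting vector, which is exactly the uniqueness being claimed.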
I am thinking of using a distance between two distributions $\pi^a$ and $\pi^b$, namely $d(\pi^a,\pi^b)=\sum_{i=1}^{N} |\pi_i^a-\pi_i^b|$.
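This distance does seem like the right tool: applying $P$ never increases it, since $\sum_j |\sum_i (\pi_i^a-\pi_i^b)\,p(i,j)| \le \sum_i |\pi_i^a-\pi_i^b| \sum_j p(i,j) = d(\pi^a,\pi^b)$, and for a regular $P$ the contraction is strict. A quick numerical check of the non-expansiveness (again with the made-up matrix from above):

```python
import numpy as np

def d(a, b):
    # L1 distance between distributions, as defined above.
    return np.abs(a - b).sum()

# Hypothetical 3-state transition matrix (an assumption, not from the problem).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.4, 0.4],
])

rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    a = rng.random(3); a /= a.sum()   # random distribution pi^a
    b = rng.random(3); b /= b.sum()   # random distribution pi^b
    # One application of P should not increase the distance.
    ok = ok and d(a @ P, b @ P) <= d(a, b) + 1e-12

assert ok
```

From here the standard route is to show a strict contraction factor (the Dobrushin coefficient) when all entries of some $P^m$ are positive, and then invoke the Banach fixed-point argument.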
I am not really sure where to go from here, and would appreciate some help if anyone knows how to approach this.