Consider the following stochastic matrix: $$M = \left(\begin{array}{ccc} \frac{1}{2} & \frac{1}{6} & \frac{1}{6}\\ \frac{1}{2} & \frac{1}{2} & \frac{1}{6}\\ 0 & \frac{1}{3} & \frac{2}{3} \end{array}\right)$$
Its eigenvalues are $\lambda = 1$, with associated eigenvector $\big(\frac{2}{3} ,1, 1\big)^{\top}$, and $\lambda = \frac{1}{3}$, with associated eigenvector $(0, -1, 1)^{\top}$ and algebraic multiplicity 2. I have a suspicion that $\lim_{n \to \infty} M^n v = \big(\frac{1}{4} ,\frac{3}{8}, \frac{3}{8}\big)^{\top}$ for any probability vector $v$, but I can't prove this. What I would do in the 2-dimensional case is expand $v$ in the eigenbasis of $M$ and note that the components along eigenvectors whose eigenvalues have modulus less than one decay to zero in the limit. However, since the repeated eigenvalue of this stochastic matrix has only one independent eigenvector, there is no eigenbasis, and this approach doesn't work.
So I have two questions:
1) Does $\lim_{n \to \infty} M^n v$ converge for any probability vector $v$? If so, what vector does it converge to and why?
2) If the limit always converges, how can I determine the smallest $n$ such that the first component of $M^n (1,0,0)^{\top}$ will be within $\epsilon$ of the first component of the limit, and likewise with the other two components?
1) Firstly, consider the $\ell_1$-normalized eigenvector $\mathbf{v} = \begin{bmatrix} 1/4 & 3/8 & 3/8 \end{bmatrix}^{\top}$, which corresponds to the dominant eigenvalue $\lambda_1 = 1$. The limit $\lim\limits_{n\to\infty} M^n \mathbf{x}$ exists for every probability vector $\mathbf{x}$ and equals $\mathbf{v}$. One way to see this is through power iteration (the power method), which works here because $\lvert\lambda_1\rvert = 1 > \lvert\lambda_j\rvert$ for $j = 2, 3$; this strict dominance holds whenever the transition matrix is irreducible and aperiodic, as it is here. The repeated eigenvalue is not an obstacle: even though $M$ is not diagonalizable, the $2\times 2$ Jordan block $J$ for $\lambda = \frac{1}{3}$ satisfies $\|J^n\| \le C\, n \left(\frac{1}{3}\right)^{n-1} \to 0$, so the part of $\mathbf{x}$ outside the direction of $\mathbf{v}$ still decays to zero.
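As a quick numerical sanity check (a sketch using NumPy, not part of the argument itself), $M^n$ should converge to the rank-one matrix whose columns are all equal to $\mathbf{v}$:

```python
import numpy as np

# The column-stochastic matrix from the question.
M = np.array([[1/2, 1/6, 1/6],
              [1/2, 1/2, 1/6],
              [0,   1/3, 2/3]])

# The l1-normalized dominant eigenvector.
v = np.array([1/4, 3/8, 3/8])

# M^64: since the subdominant eigenvalue is 1/3, the error is
# on the order of n * (1/3)^n, which is negligible at n = 64.
P = np.linalg.matrix_power(M, 64)
print(P)  # every column is (numerically) v
```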
What power iteration states is that, starting from almost any vector $\mathbf{x}_0$, we approach the (normalized) dominant eigenvector via $$\mathbf{x}_{k+1}=\frac{M \mathbf{x}_k}{\| M \mathbf{x}_k\|_1}\overset{k\to\infty}{\longrightarrow}\mathbf{v}.$$ Note that when $M$ is column-stochastic and $\mathbf{x}_k$ is a probability vector, $\|M \mathbf{x}_k\|_1 = 1$, so the iteration reduces to $\mathbf{x}_k = M^k \mathbf{x}_0$. This also answers question 2): iterate $\mathbf{x}_{k+1} = M \mathbf{x}_k$ starting from $(1,0,0)^{\top}$ and take the smallest $k$ for which every component of $\mathbf{x}_k$ is within $\epsilon$ of the corresponding component of $\mathbf{v}$.
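A minimal sketch of that stopping rule in Python/NumPy (the helper name `smallest_n` is mine, not from the question):

```python
import numpy as np

M = np.array([[1/2, 1/6, 1/6],
              [1/2, 1/2, 1/6],
              [0,   1/3, 2/3]])
limit = np.array([1/4, 3/8, 3/8])  # the l1-normalized dominant eigenvector

def smallest_n(x0, eps):
    """Smallest n with |(M^n x0)_i - limit_i| < eps for every component i."""
    x, n = x0.astype(float), 0
    while np.max(np.abs(x - limit)) >= eps:
        # M is column-stochastic and x is a probability vector,
        # so ||M x||_1 = 1 and no renormalization is needed.
        x = M @ x
        n += 1
    return n

print(smallest_n(np.array([1.0, 0.0, 0.0]), 1e-6))
```

Since the error shrinks roughly like $n\left(\frac{1}{3}\right)^{n}$, only a modest number of iterations is needed even for small $\epsilon$.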