This is my first post on here, so sorry for any formatting issues. I believe this process is called a Markov chain, but I don't have the math knowledge required to understand it fully. Essentially, I have a vector of the form
$$\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \end{pmatrix}$$
I also have a matrix in which each row sums to $1$ (Edit: all elements must be positive as well). An example is the following matrix:
$$\begin{pmatrix} 1/6 & 2/3 & 1/6\\ 5/12 & 1/6 & 5/12\\ 1/6 & 2/3 & 1/6 \end{pmatrix}$$
The vector is repeatedly multiplied by the matrix until the Euclidean distance between successive iterates falls below a certain threshold value.
My question is: is there a way to find the expected number of iterations required? For example, for a threshold value of $0.0001$ I found empirically that about $30$ iterations were needed, and also that the size of the matrix/vector does not affect this value. Is there a way to confirm this?
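For concreteness, the iteration described above can be sketched in Python with NumPy as follows (the function name, tolerance, and iteration cap are my own choices, not part of the original question):

```python
import numpy as np

def iterate_until_converged(P, v0, tol=1e-4, max_iter=10_000):
    """Repeatedly multiply the row vector v by the matrix P until the
    Euclidean distance between successive iterates drops below tol.
    Returns the final vector and the number of iterations used."""
    v = np.asarray(v0, dtype=float)
    for n in range(1, max_iter + 1):
        v_next = v @ P          # row vector times row-stochastic matrix
        if np.linalg.norm(v_next - v) < tol:
            return v_next, n
        v = v_next
    raise RuntimeError("did not converge within max_iter iterations")

# The 3x3 example matrix and the starting vector (1, 0, 0):
P = np.array([[1/6,  2/3, 1/6],
              [5/12, 1/6, 5/12],
              [1/6,  2/3, 1/6]])
v, n = iterate_until_converged(P, np.array([1.0, 0.0, 0.0]), tol=1e-4)
```

Since the rows of $P$ sum to $1$, each multiplication preserves the sum of the entries of $v$, so the iterates remain probability vectors.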
Since each row sums to $1$, the $m \times m$ matrix $P$ is a right stochastic matrix.
Assuming you can diagonalize $P$, then
$ P = V D V^{-1} $
Now,
$ x_{n+1} = P x_{n} = V D V^{-1} x_n $
Hence,
$ x_n = V D^{n} V^{-1} x_0 $
The diagonal entries of $D$ can be ordered by decreasing absolute value, from $1$ down to the smallest eigenvalue. Since all entries of $P$ are positive, the Perron–Frobenius theorem guarantees that the eigenvalue $1$ is simple and strictly dominant. That is,
$ 1 = \lambda_1 \gt | \lambda_2 | \ge | \lambda_3 | \ge \dots \ge | \lambda_m | $
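This ordering is easy to check numerically. A small NumPy sketch for the $3 \times 3$ example above (the sorting step is my own bookkeeping; `np.linalg.eig` does not order eigenvalues):

```python
import numpy as np

# Eigendecomposition of the 3x3 example, with eigenvalues sorted by
# descending absolute value so that lambda_1 = 1 comes first.
P = np.array([[1/6,  2/3, 1/6],
              [5/12, 1/6, 5/12],
              [1/6,  2/3, 1/6]])
lam, V = np.linalg.eig(P)
order = np.argsort(-np.abs(lam))
lam, V = lam[order], V[:, order]
```

For this particular matrix the eigenvalues work out to $1$, $-1/2$, and $0$ (the first and third rows are identical, so $P$ is singular and $0$ is an eigenvalue), so $|\lambda_2| = 1/2$.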
From the last equation, we can write,
$ V^{-1} x_n = D^n V^{-1} x_0 $
Define $y_n = V^{-1} x_n$
Then
$ y_n = D^n y_0 $
As $n \to \infty$, $y_n \to y_\infty = [ y_{01} , 0, 0, \dots , 0]^T $
Now,
$ y_n - y_\infty = D^n y_0 - e_1 e_1^T y_0 = [ D^n - e_1 e_1^T ] y_0 $
The matrix $[ D^n - e_1 e_1^T] $ has a zero $(1,1)$ entry, and the rest of its diagonal elements are $\lambda_2^n, \lambda_3^n , \dots , \lambda_m^n$. Since $\lambda_2$ is at least as large in absolute value as each of the remaining eigenvalues, it is dominant. Therefore, for $n$ sufficiently large, we have
$ \| y_n - y_\infty \| = \| V^{-1} (x_n - x_\infty ) \| \approx |\lambda_2|^n \| y_0 \| = |\lambda_2|^n \| V^{-1} x_0 \| $
But,
$ \| V^{-1} (x_n - x_\infty ) \| \le L_{max} \| x_n - x_\infty \| $
and
$ \| V^{-1} x_0 \| \ge L_{min} \| x_0 \| $
where $L_{max}$ and $L_{min} $ are the maximum and minimum square roots of the eigenvalues of $V^{-T} V^{-1} $ (i.e. the maximum and minimum singular values of $V^{-1}$).
Therefore,
$ L_{min} \| x_0 \| | \lambda_2 |^n \le L_{max} \| x_n - x_\infty \| $
Now, we want $ \| x_n - x_\infty \| \le \varepsilon $; then we will have
$ L_{min} \|x_0 \| |\lambda_2|^n \le L_{max} \varepsilon $
Which implies
$ n \ge \dfrac{ \log( L_{max} \varepsilon) - \log( L_{min} \| x_0 \| ) }{ \log | \lambda_2 | } $
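As a sketch, the bound can be evaluated with NumPy as follows (the function name is mine; also note that $L_{max}$ and $L_{min}$ depend on how the eigenvectors in $V$ are scaled, so the numerical value of the bound depends on that normalization):

```python
import numpy as np

def iteration_bound(P, x0, eps):
    """Evaluate n >= (log(L_max * eps) - log(L_min * ||x0||)) / log|lambda_2|
    for a diagonalizable stochastic matrix P."""
    lam, V = np.linalg.eig(P)
    order = np.argsort(-np.abs(lam))
    lam, V = lam[order], V[:, order]
    Vinv = np.linalg.inv(V)
    # Singular values of V^{-1}, returned in descending order.
    s = np.linalg.svd(Vinv, compute_uv=False)
    L_max, L_min = s[0], s[-1]
    lam2 = abs(lam[1])
    n = (np.log(L_max * eps) - np.log(L_min * np.linalg.norm(x0))) / np.log(lam2)
    return int(np.ceil(n))

P = np.array([[1/6,  2/3, 1/6],
              [5/12, 1/6, 5/12],
              [1/6,  2/3, 1/6]])
n = iteration_bound(P, np.array([1.0, 0.0, 0.0]), eps=1e-8)
```

Since $\log |\lambda_2| < 0$, dividing by it flips the inequality, which is why the bound is a lower bound on $n$.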
I've implemented the repeated multiplication and cross-checked the number of iterations required against that given by the above formula, and the two match. For the given $3 \times 3$ example, with $\varepsilon = 10^{-8}$ and the initial vector $e_1$, the number of iterations required is $27$, both by direct evaluation and by the above formula.
As another example, consider the $4 \times 4$ matrix
$ P = \begin{bmatrix} 0.1 & 0.3 & 0.2 & 0.4 \\ 0.5 & 0.1 & 0.2 & 0.2 \\ 0.15 & 0.3 & 0.05 & 0.5 \\ 0.7 & 0.2 & 0.05 & 0.05 \end{bmatrix} $
And let $\varepsilon = 10^{-8} $ and $x_0 = [1, 2, 3, 4]^T $
The actual minimum number of multiplications necessary to get $\| x_n - x_\infty \| \lt \varepsilon $ was $23$, while the value obtained with the formula is $21$. The reason for the discrepancy is that we approximated the magnitude of $\| x_n - x_\infty \|$ based solely on $|\lambda_2|^n$, when in fact it depends on the other eigenvalues as well.
As a third example, let the $5 \times 5$ matrix $P$ be given by
$ P = \begin{bmatrix} 0.15 & 0.25 & 0.25 & 0.25 & 0.1 \\ 0.1 & 0.3 & 0.2 & 0.2 & 0.2 \\ 0.15 & 0.25 & 0.15 & 0.4 & 0.05 \\ 0.1 & 0.2 & 0.3 & 0.4 & 0 \\ 0.15 & 0.15 & 0.15 & 0.15 & 0.4 \end{bmatrix} $
And let $ \varepsilon = 10^{-8} $ and $ x_0 =[1,2,3,4,5]^T $
The actual number of iterations required was $19$, while the number predicted by the formula is $20$.
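For completeness, the direct count for the $5 \times 5$ example can be reproduced with a short script like the following (estimating $x_\infty$ by iterating far beyond the tolerance is my own shortcut; here I follow the answer's convention $x_{n+1} = P x_n$ with a column vector):

```python
import numpy as np

# Direct iteration count for the 5x5 example: how many multiplications by P
# are needed before ||x_n - x_inf|| < eps.
P = np.array([[0.15, 0.25, 0.25, 0.25, 0.1],
              [0.1,  0.3,  0.2,  0.2,  0.2],
              [0.15, 0.25, 0.15, 0.4,  0.05],
              [0.1,  0.2,  0.3,  0.4,  0.0],
              [0.15, 0.15, 0.15, 0.15, 0.4]])
x0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
eps = 1e-8

# Estimate the limit x_inf by iterating far past convergence; by then
# the subdominant modes have decayed to machine precision.
x_inf = np.linalg.matrix_power(P, 500) @ x0

x, n = x0.copy(), 0
while np.linalg.norm(x - x_inf) >= eps:
    x = P @ x
    n += 1
```

Running the same script for the other two examples reproduces the direct counts quoted above.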