Let $A$ be an $N \times N$ matrix with $N$ linearly independent eigenvectors $x_1, x_2, \dots, x_N$ and corresponding eigenvalues $\lambda_i$, where $|\lambda_1| \gt |\lambda_2| \ge \dots \ge |\lambda_N|$. Let $$x^{0}=\sum_{i=1}^N \alpha_i x_i, \qquad \alpha_1 \ne 0,$$ and $$x^{n}=\frac{A^n x^{0}}{\|A^n x^{0}\|_{\infty}}.$$ Show that as $n \to \infty$, $x^{n} \to \frac{x_1}{\|x_1\|_{\infty}}$ and $\|Ax^{n}\|_{\infty} \to \lambda_1$.
Clearly $A^{n}x^{0}=\sum_{i=1}^{N}\alpha_i\lambda_i^n x_i=\lambda_1^n\sum_{i=1}^N\alpha_i\left(\frac{\lambda_i}{\lambda_1}\right)^n x_i$. Looking at the term $$\left\|\sum_{i=1}^N\alpha_i\left(\frac{\lambda_i}{\lambda_1}\right)^nx_i-\alpha_1x_1\right\|_{\infty} \le \sum_{i=2}^{N}|\alpha_i|\left|\frac{\lambda_i}{\lambda_1}\right|^{n}\|x_i\|_{\infty},$$ this goes to $0$ as $n \to \infty$, since $|\lambda_i/\lambda_1| \lt 1$ for $i \ge 2$. Hence $\sum_{i=1}^N\alpha_i\left(\frac{\lambda_i}{\lambda_1}\right)^nx_i \to \alpha_1x_1$, and since $\|\cdot\|_{\infty}$ is continuous we have $\left\|\sum_{i=1}^N\alpha_i\left(\frac{\lambda_i}{\lambda_1}\right)^nx_i\right\|_{\infty} \to |\alpha_1|\|x_1\|_{\infty}$. Moreover, since $\lambda_1^n/|\lambda_1|^n = (\operatorname{sgn}\lambda_1)^n = \pm 1$, $$x^{n}=\frac{A^nx^{0}}{\|A^nx^{0}\|_{\infty}}=\frac{\lambda_1^n\sum_{i=1}^N\alpha_i\left(\frac{\lambda_i}{\lambda_1}\right)^nx_i}{|\lambda_1|^n\left\|\sum_{i=1}^N\alpha_i\left(\frac{\lambda_i}{\lambda_1}\right)^nx_i\right\|_{\infty}} = \pm\frac{\sum_{i=1}^N\alpha_i\left(\frac{\lambda_i}{\lambda_1}\right)^nx_i}{\left\|\sum_{i=1}^N\alpha_i\left(\frac{\lambda_i}{\lambda_1}\right)^nx_i\right\|_{\infty}}.$$
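The convergence $\sum_{i=1}^N \alpha_i (\lambda_i/\lambda_1)^n x_i \to \alpha_1 x_1$ can be checked numerically. Below is a small sketch with hypothetical data: the matrix $X$ (columns are the eigenvectors $x_i$), the eigenvalues `lam`, and the coefficients `alpha` are all made up for illustration; the only requirements from the problem are $|\lambda_1| > |\lambda_2| \ge |\lambda_3|$ and $\alpha_1 \ne 0$.

```python
import numpy as np

# Hypothetical eigenvectors (columns x_1, x_2, x_3), eigenvalues, coefficients.
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
lam = np.array([4.0, 2.0, -1.0])    # |lambda_1| = 4 strictly dominates
alpha = np.array([0.7, -0.5, 0.3])  # alpha_1 != 0

for n in (1, 5, 10, 20):
    s = sum(alpha[i] * (lam[i] / lam[0])**n * X[:, i] for i in range(3))
    err = np.linalg.norm(s - alpha[0] * X[:, 0], ord=np.inf)
    print(n, err)  # error shrinks roughly like (|lambda_2|/|lambda_1|)^n = 0.5^n
```

The printed errors decay geometrically at rate $|\lambda_2|/|\lambda_1|$, matching the bound above.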
I have no issue proving this when the eigenvalue of maximum modulus is positive. But $\lambda_1$ must be positive for the second part to hold as stated, since $\|Ax^{n}\|_{\infty}$ is a nonnegative quantity, and positivity is not assumed in the question. How do I conclude from here? I see that the limits along the two sign choices would have to be exactly the same, but I am unable to prove so.
Thanks for the help!!
If the maximum of all eigenvalues (in modulus) is zero, then the matrix is zero, and you don't have to do this procedure. Secondly, your iterative scheme is the power method. In general, the power-method iterates need not converge to an eigenvector, but they get closer and closer to the dominant eigenspace. If the approximate eigenvectors are normalized (say, to unit Euclidean norm), the sequence gets closer to $\pm$ the dominant eigenvector, and when $\lambda_1 < 0$ the sign alternates from step to step. Refer to any numerical analysis book for the power method.
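The sign-alternation phenomenon is easy to see numerically. Below is a minimal sketch of the power method (with the $\infty$-norm, as in the question) on a made-up $2\times 2$ matrix whose dominant eigenvalue is negative; the matrix and starting vector are assumptions chosen only for illustration.

```python
import numpy as np

# Eigenvalues of this matrix are 1 and -3, so lambda_1 = -3 dominates in modulus.
A = np.array([[-1.0, 2.0],
              [2.0, -1.0]])

x = np.array([1.0, 0.3])  # starting vector with alpha_1 != 0
for _ in range(50):
    y = A @ x
    x = y / np.linalg.norm(y, ord=np.inf)

# The Rayleigh-type estimate recovers |lambda_1| = 3, not lambda_1 = -3:
print(np.linalg.norm(A @ x, ord=np.inf))
# The iterates themselves flip sign each step, so x^n does not converge:
x_next = (A @ x) / np.linalg.norm(A @ x, ord=np.inf)
print(x, x_next)
```

Here $\|Ax^{n}\|_{\infty} \to |\lambda_1| = 3$ while the iterates alternate between $\pm$ the dominant eigenvector, which is exactly why the problem statement implicitly needs $\lambda_1 > 0$.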