This is from Peter Lax's *Linear Algebra*, Chapter 18. If $A$ is a self-adjoint matrix, denote by $U$ the matrix whose columns are its (orthonormal) eigenvectors, $$U = (u_1,\dots,u_n).$$ If the corresponding eigenvalues are $d_1, \dots, d_n$ and $D = \operatorname{diag}(d_1,\dots,d_n)$, then $A = UDU^T$, and $A^k = UD^kU^T$. It follows from this formula that each column of $A^k$ is a linear combination of the eigenvectors of $A$ of the following form: $$b_1d_1^ku_1 + \cdots + b_nd_n^ku_n,$$ where $b_1, \dots, b_n$ do not depend on $k$.
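As a sanity check, here is a minimal numerical sketch of the two identities quoted above (the random symmetric test matrix and all variable names are mine, not Lax's):

```python
import numpy as np

# Build a random symmetric (hence self-adjoint) matrix A.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

d, U = np.linalg.eigh(A)           # eigenvalues d, orthonormal eigenvectors as columns of U
D = np.diag(d)

# A = U D U^T
assert np.allclose(A, U @ D @ U.T)

# A^k = U D^k U^T, here with k = 5
k = 5
assert np.allclose(np.linalg.matrix_power(A, k), U @ np.diag(d**k) @ U.T)
```

The columns of $U$ returned by `np.linalg.eigh` are orthonormal, which is what makes $U^{-1} = U^T$ and hence $A^k = UD^kU^T$ work.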
We now assume that the eigenvalues of $A$ are distinct and positive, and arrange them in decreasing order $$d_1 > d_2 > \dots > d_n > 0.$$ It then follows that, provided $b_1 \neq 0$, for large $k$ the first column of $A^k$ is very close to a constant multiple of $u_1$.
I don't understand why this would be close to a multiple of $u_1$ for large $k$. Intuitively it makes some sense: for large $k$, the term with $d_1^k$ should dominate the others. But I don't know how to make this rigorous.
Divide through by $d_1^k$ and note that $\frac{d_i^k}{d_1^k} = \left(\frac{d_i}{d_1}\right)^k \longrightarrow 0$ for every $i \geq 2$, while the $i = 1$ ratio is identically $1$. So the leading term eventually dominates the sum. (That is, the relative error from ignoring the subleading terms goes to zero; the absolute error itself need not decrease, and in fact it does not.)
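Spelled out, writing $c_k := b_1 d_1^k u_1 + \cdots + b_n d_n^k u_n$ for the column in question, the argument is:

$$\frac{c_k}{d_1^k} = b_1 u_1 + \sum_{i=2}^{n} b_i \left(\frac{d_i}{d_1}\right)^k u_i \;\longrightarrow\; b_1 u_1 \qquad (k \to \infty),$$

since $0 < d_i/d_1 < 1$ for $i \geq 2$. Hence $c_k = d_1^k\bigl(b_1 u_1 + o(1)\bigr)$: after rescaling by $d_1^{-k}$, the column converges to the multiple $b_1 u_1$ of $u_1$, which is the rigorous version of "close to a constant multiple of $u_1$."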
(This is analogous to the technique for computing limits of rational functions: divide the numerator and denominator by the highest power of the variable, so that the subleading terms vanish in the limit.)
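To see the claim numerically, here is a small illustration (the $3 \times 3$ matrix below is my own example, not from the book): the normalized first column of $A^{k}$ agrees with $u_1$ up to sign.

```python
import numpy as np

# Symmetric, positive definite, with distinct eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

d, U = np.linalg.eigh(A)                  # eigenvalues in increasing order
u1 = U[:, -1]                             # eigenvector of the largest eigenvalue

c = np.linalg.matrix_power(A, 50)[:, 0]   # first column of A^50
c /= np.linalg.norm(c)                    # divide out the constant multiple

# Up to an overall sign, c is now numerically indistinguishable from u_1.
err = min(np.linalg.norm(c - u1), np.linalg.norm(c + u1))
print(err)
```

The convergence rate is governed by $(d_2/d_1)^k$; here $d_2/d_1 = 3/(3+\sqrt{3}) \approx 0.63$, so $k = 50$ already gives agreement to about ten decimal places.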