Finding eigenvector by taking the square of a matrix


In some old lecture notes, I saw a different approach to finding an eigenvector by taking powers of a matrix. I can see that it works well, and I would like to use it in a study and paper. Despite searching many resources, I could not find any reference for this method. Can anybody give a reference for, or the name of, this method? Regards. Please see the attached note:



Power iteration.

It works because taking a high power of your matrix makes it approximate a projection operator on the space spanned by the eigenvector corresponding to the eigenvalue of largest magnitude.

There are two assumptions: there must be a unique eigenvalue of largest magnitude, and your initial vector (you used $(1,1,1)^T$, hence the sum of the columns) should have a nonzero component along the eigenvector.
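The procedure described above can be sketched as a short NumPy script (a minimal illustration; the example matrix and iteration count are my own choices, not from the original note):

```python
import numpy as np

def power_iteration(A, x, iters=100):
    """Repeatedly apply A and renormalize; under the two assumptions
    above this converges to the dominant eigenvector."""
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)
    return x

# Symmetric example with eigenvalues 4, 2, 1 (unique largest magnitude).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Starting from (1,1,1)^T, as in the question.
v = power_iteration(A, np.ones(3))

# Rayleigh quotient recovers the corresponding eigenvalue (~4 here).
lam = v @ A @ v
```

Renormalizing at every step (rather than only at the end) keeps the entries from overflowing or underflowing for large powers.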


This is, as Wouter correctly stated, called power iteration. It works via the eigen-decomposition of an arbitrary starting vector $x$: you use the fact that $x = \sum \limits_{i=1}^n \alpha_i v_i$, with $v_i$ being the $i$-th eigenvector.

If you now calculate $A^k x$ you get $A^kx = \sum \limits_{i=1}^n \alpha_i \lambda_i^k v_i$. As $k\rightarrow \infty$, all terms except the one with the largest-magnitude eigenvalue vanish in comparison. By rescaling the result (and, in practice, the intermediate vectors) you obtain a numerical approximation to the eigenvector corresponding to the eigenvalue of largest magnitude.
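The identity $A^k x = \sum_i \alpha_i \lambda_i^k v_i$ can be checked numerically; the sketch below uses a small symmetric matrix (my own example) so that the eigenvectors are orthonormal and the coefficients are simply $\alpha_i = v_i \cdot x$:

```python
import numpy as np

# Small symmetric example matrix (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)   # columns of V are orthonormal eigenvectors v_i
x = np.array([1.0, 0.0])     # arbitrary starting vector
alpha = V.T @ x              # expansion coefficients alpha_i = v_i . x

k = 5
lhs = np.linalg.matrix_power(A, k) @ x   # A^k x computed directly
rhs = V @ (alpha * lam**k)               # sum_i alpha_i * lambda_i^k * v_i
```

Both sides agree to floating-point precision, and for growing $k$ the term with the largest $|\lambda_i|$ dominates the sum.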

This will, however, fail if there are multiple distinct eigenvalues of the same largest magnitude.
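A minimal illustration of that failure mode (my own example, not from the answer): with eigenvalues $1$ and $-1$, which have equal magnitude, the normalized iterates oscillate between two vectors instead of converging.

```python
import numpy as np

# Eigenvalues 1 and -1 have the same magnitude, so power
# iteration does not converge: the iterates flip sign forever.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])

x = np.array([1.0, 1.0]) / np.sqrt(2)
for _ in range(4):
    x = A @ x
    x = x / np.linalg.norm(x)
# Each step flips the sign of the second component:
# (1,1)/sqrt(2) <-> (1,-1)/sqrt(2), so after an even number
# of steps x is back at the starting vector.
```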

This is a very special case of a Krylov subspace method. If you use more sophisticated variants of these methods (e.g. the Rayleigh–Ritz ansatz), you can approximate the eigenvalues and eigenvectors of any matrix.