Eigenvectors of a matrix with the power iteration method?


I'm writing an algorithm to find the eigenvalues and eigenvectors of a positive definite matrix with the power iteration method. I know it's very crude, of course, and there are better methods, but this is just a trivial application and I don't want to go much beyond it (nor do I have access to any libraries; the language doesn't have any). I find each eigenvalue with a power iteration, then at the end of each one I redefine the matrix as:

$ \mathbf{M}^{(n)} = \mathbf{M}^{(n-1)}-\lambda_{n-1}\mathbf{e}_{n-1}\mathbf{e}_{n-1}^T $

to remove that eigenvalue, and repeat the process. The eigenvalues I get are excellent: they easily match the NumPy solution to 1E-6 precision. However, the eigenvectors for all but the first one or two eigenvalues are a complete mess. I perform a Gram-Schmidt orthogonalisation on them after the power iteration finishes, and I even check that, as Rayleigh quotients with the original matrix, they return the correct eigenvalues (they do), but they are still very different from the ones NumPy gives. What could I look into? Is it just numerical noise, with no chance of improvement unless I move to better algorithms, or am I missing something obvious?
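For reference, here is a minimal sketch of the procedure described above (power iteration plus the deflation update $\mathbf{M}^{(n)} = \mathbf{M}^{(n-1)}-\lambda_{n-1}\mathbf{e}_{n-1}\mathbf{e}_{n-1}^T$), written in Python/NumPy purely for illustration; the function names, tolerance, and stopping rule are my own, not the OP's:

```python
import numpy as np

def power_iteration(M, iters=1000, tol=1e-12):
    """Dominant eigenpair of M by repeated multiplication and normalisation."""
    v = np.random.default_rng(0).standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = M @ v
        lam_new = v @ w              # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
        converged = abs(lam_new - lam) < tol
        lam = lam_new
        if converged:
            break
    return lam, v

def eig_by_deflation(M, k=None):
    """Eigenpairs of a symmetric positive definite M via power iteration
    and the deflation step M <- M - lam * e e^T from the question."""
    M = np.array(M, dtype=float, copy=True)
    k = M.shape[0] if k is None else k
    vals, vecs = [], []
    for _ in range(k):
        lam, e = power_iteration(M)
        vals.append(lam)
        vecs.append(e)
        M -= lam * np.outer(e, e)    # remove the eigenpair just found
    return np.array(vals), np.column_stack(vecs)
```

Note the asymmetry: for a symmetric matrix the eigenvalue estimate converges roughly twice as fast as the eigenvector, so the deflated matrices inherit the larger eigenvector error, which is consistent with the symptoms described.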


BEST ANSWER

The missing point is that the matrix is not symmetric (symmetry is usually implicitly assumed when one speaks of positive definiteness). If it is not symmetric, the eigenvectors are very likely not orthogonal, so there is no reason to assume that $M^{(n)}$ and $M^{(n-1)}$ share their remaining eigenvectors after the update above.

What you need is the update $$ M^{(n)}=M^{(n-1)}-\lambda_{n-1}e_{n-1}f_{n-1}^T, $$ where $f_{n-1}$ is the left eigenvector corresponding to the eigenvalue $\lambda_{n-1}$, normalized such that $f_{n-1}^Te_{n-1}=1$. It can be computed by the same power method applied to the transpose of $M^{(n-1)}$.
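A small numerical check of this two-sided deflation, assuming NumPy is at hand; the left eigenvector is obtained, as suggested, by running the same power iteration on $M^T$ (the 2×2 matrix is an arbitrary nonsymmetric example, not from the question):

```python
import numpy as np

def dominant_pair(M, iters=2000):
    """Largest-magnitude eigenvalue and its eigenvector by power iteration."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)
    return v @ (M @ v), v

# Arbitrary nonsymmetric 2x2 example with real, distinct eigenvalues.
M = np.array([[4.0, 1.0],
              [0.5, 2.0]])

lam, e = dominant_pair(M)        # right eigenvector of M
_,  f = dominant_pair(M.T)       # left eigenvector of M
f = f / (f @ e)                  # normalise so that f^T e = 1

M1 = M - lam * np.outer(e, f)    # two-sided deflation
lam2, e2 = dominant_pair(M1)     # second eigenvalue of the original M
```

The key property being exploited: the left eigenvector of $\lambda_1$ is orthogonal to the right eigenvectors of the other eigenvalues, so the deflation leaves those eigenpairs untouched.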

ANSWER

There are two problems at play here.

First: in a power iteration method you usually don't redefine your matrix by subtracting the dyadic product of the found eigenvector. It is more common to subtract the contribution of the already-found eigenpairs from the current iterate: at each step you compute $\tilde{v}_{i+1} = Mv_i - \lambda_1 e_1 e_1^Tv_i$ and continue with $\tilde{v}_{i+1}$ (normalised). This is mathematically equivalent to your approach, since $(M - \lambda_1 e_1 e_1^T)v_i = Mv_i - \lambda_1 e_1 (e_1^T v_i)$, but it does not require forming the dyadic product explicitly and does not destroy any sparsity you have in $M$.
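This on-the-fly variant could be sketched as follows (a hypothetical helper of my own naming, assuming a real symmetric $M$ and unit-norm eigenvectors):

```python
import numpy as np

def next_eigenpair(M, found, iters=2000):
    """Power iteration that removes already-found components on the fly,
    instead of modifying M (so M stays sparse if it was sparse).
    `found` is a list of (lam, e) pairs; assumes symmetric M, unit-norm e."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        w = M @ v
        for lam, e in found:
            w -= lam * e * (e @ v)   # same effect as applying the deflated M
        v = w / np.linalg.norm(w)
    return v @ (M @ v), v
```

Called repeatedly with the growing `found` list, this reproduces the deflation sequence without ever touching the entries of $M$.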

Second: the power method works by expanding your current iterate in the eigenvector basis, $$v_i = \sum_k c_k e_k,$$ where the $e_k$ are the eigenvectors. One iteration then gives $$ v_{i+1} = \sum_k \lambda_k c_k e_k, $$ i.e. every coefficient is multiplied by its eigenvalue. Over many iterations the largest eigenvalue dominates and, numerically speaking, becomes the only one left. Since your matrix describes coupled oscillators, at some point you no longer have a single largest eigenvalue. At that point the algorithm can do a lot of things, including converging to any one of them or switching periodically from one to the other. And if you are going to find complex eigenvalues, using a real $v_i$ will never give you a good result for an eigenvector.
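A concrete instance of the equal-magnitude failure mode, for illustration: with $M = \operatorname{diag}(1, -1)$ the iterates flip between two directions forever and never settle on either eigenvector:

```python
import numpy as np

M = np.diag([1.0, -1.0])                 # two eigenvalues of equal magnitude
v = np.array([1.0, 1.0]) / np.sqrt(2)    # equal mix of both eigenvectors
for _ in range(5):
    w = M @ v
    v = w / np.linalg.norm(w)
# v alternates between (1, 1)/sqrt(2) and (1, -1)/sqrt(2), so the
# Rayleigh quotient v @ M @ v stays 0 and converges to neither eigenvalue.
```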

Edit: the OP clarified that there are no complex eigenvalues. The point about eigenvalues of equal magnitude still holds.