I'm writing an algorithm to find the eigenvalues and eigenvectors of a positive definite matrix with the power iteration method. I know it's very crude, of course, and there are better methods, but this is just a trivial application and I don't want to go much beyond it (nor do I have access to any libraries; the language has none). I find each eigenvalue with a power iteration, and at the end of each one I redefine the matrix as:
$ \mathbf{M}^{(n)} = \mathbf{M}^{(n-1)}-\lambda_{n-1}\mathbf{e}_{n-1}\mathbf{e}_{n-1}^T $
to remove the eigenvalue, and repeat the process. I get excellent values for the eigenvalues - they match the solution I get from NumPy to within 1E-6, easily. However, the eigenvectors for all but the first one or two eigenvalues are a complete mess. I perform a Gram-Schmidt orthogonalisation on them after the power iteration finishes, and I even check that they return the correct eigenvalues as Rayleigh quotients with the original matrix - they do - but they are still very different from the ones I get from NumPy. What could I look into? Is it just numerical noise, with no chance of improvement unless I move to better algorithms, or am I missing something obvious?
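For reference, here is a minimal sketch of what I'm doing, translated to Python/NumPy so the comparison is easy to run (my real code is in another language; the small non-symmetric test matrix is just illustrative):

```python
import numpy as np

def power_iteration(M, iters=5000, tol=1e-14):
    """Dominant eigenvalue/eigenvector of M by plain power iteration."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = M @ v
        lam_new = v @ w              # Rayleigh quotient (v has unit norm)
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) <= tol * abs(lam_new):
            return lam_new, v
        lam = lam_new
    return lam, v

# Illustrative non-symmetric matrix with eigenvalues 3, 2, 1
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])

# Deflation exactly as described above: M <- M - lam * e e^T
M = A.copy()
lams, residuals = [], []
for _ in range(3):
    lam, e = power_iteration(M)
    lams.append(lam)
    residuals.append(np.linalg.norm(A @ e - lam * e))
    M = M - lam * np.outer(e, e)

print(lams)        # close to [3, 2, 1]: the eigenvalues survive the deflation
print(residuals)   # first ~ 0, the others O(1): the later vectors are wrong
```

This reproduces the symptom: the eigenvalues agree with `np.linalg.eig(A)`, but the second and third vectors are not eigenvectors of the original matrix at all.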
The missing point is that the matrix is not symmetric (symmetry is usually implicitly assumed when one speaks of positive definiteness). If it is not symmetric, its eigenvectors are in general not orthogonal, so the update $M^{(n)} = M^{(n-1)} - \lambda_{n-1}e_{n-1}e_{n-1}^T$ does not preserve the remaining right eigenvectors. It does, however, preserve the remaining eigenvalues: every left eigenvector $f_j$ of $M^{(n-1)}$ with $j \neq n-1$ satisfies $f_j^Te_{n-1} = 0$ by biorthogonality, so it remains a left eigenvector of $M^{(n)}$ with the same eigenvalue. That is exactly why your eigenvalues come out accurate while the eigenvectors do not.
What you need to do instead is use the update $$ M^{(n)}=M^{(n-1)}-\lambda_{n-1}e_{n-1}f_{n-1}^T, $$ where $f_{n-1}$ is the left eigenvector corresponding to the eigenvalue $\lambda_{n-1}$, normalized such that $f_{n-1}^Te_{n-1}=1$. It can be computed by the same power method applied to the transpose of $M^{(n-1)}$. With this (Wielandt) deflation, every other right eigenvector $u_j$ satisfies $f_{n-1}^Tu_j=0$, so it remains a right eigenvector of $M^{(n)}$ with its eigenvalue unchanged, and the power iteration on the deflated matrix returns eigenvectors of the original matrix.
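A minimal sketch of this two-sided deflation in Python/NumPy; the function names, tolerances, and the small non-symmetric test matrix are illustrative assumptions, not part of the question:

```python
import numpy as np

def power_iteration(M, iters=5000, tol=1e-14):
    """Dominant eigenvalue/eigenvector of M by plain power iteration."""
    rng = np.random.default_rng(0)       # random start avoids unlucky
    v = rng.standard_normal(M.shape[0])  # zero components along eigenvectors
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = M @ v
        lam_new = v @ w              # Rayleigh quotient (v has unit norm)
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) <= tol * abs(lam_new):
            return lam_new, v
        lam = lam_new
    return lam, v

def eig_by_wielandt_deflation(A, k):
    """First k eigenpairs of A, deflating with the LEFT eigenvector."""
    M = np.array(A, dtype=float)
    pairs = []
    for _ in range(k):
        lam, e = power_iteration(M)      # right eigenvector of current M
        _, f = power_iteration(M.T)      # left eigenvector of current M
        f = f / (f @ e)                  # scale so that f^T e = 1
        pairs.append((lam, e))
        M = M - lam * np.outer(e, f)     # lam -> 0, rest of spectrum intact
    return pairs

# Illustrative non-symmetric matrix with eigenvalues 3, 2, 1
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
pairs = eig_by_wielandt_deflation(A, 3)
for lam, e in pairs:
    print(lam, np.linalg.norm(A @ e - lam * e))  # residuals ~ 0 for ALL pairs
```

Note that the scaling $f^Te = 1$ makes the rank-one correction $\lambda\, e f^T$ independent of the (arbitrary) signs the two power iterations converge to, since flipping $e$ flips the normalized $f$ as well.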