Given a square idempotent $N \times N$ matrix $A$ with large $N$, and a priori knowledge of the rank $K$, what is the most efficient way to compute the $K$ eigenvectors corresponding to the $K$ non-zero eigenvalues?
Information:
- Matrix is idempotent, therefore all eigenvalues are $1$ or $0$.
- Matrix is not symmetric.
- $K \ll N$.
- I'd like to avoid numerical computation of the eigenvalues, as I already know them, i.e., there are $K$ eigenvalues of magnitude $1$ and $N-K$ eigenvalues of magnitude $0$.
If the matrix were symmetric, an eigendecomposition would give $A = Q\Lambda Q^T$, and $Q\Lambda$ would consist of $K$ orthonormal columns and $N-K$ zero columns. Since it's not symmetric, I believe this will not be the case, so I'd settle for the closest such matrix.
Context: essentially think of my matrix $A$ as a small perturbation around a symmetric idempotent matrix of the same size. I need the $N \times K$ matrix $B$ which is closest to giving $BB^T = A$. This must be done numerically, so really, the advice that would be ideal is "use LAPACK routine XYZ with parameter ABC to avoid computing the eigenvalues". Unfortunately, I can't seem to find any such routines which don't compute the eigenvalues as part of the process.
As far as I can tell, this has nothing to do with eigenvalues. What you want is the vectors $x$ such that $Ax=x$. That is, you want to solve the system $(A-I)x=0$.
What is the best method for this? That depends on information you are not giving, like $N$ and the kind of entries in the matrix. For small $N$ (maybe a few thousand) one can try row reduction (Gauss-Jordan), provided enough entries are bounded away from zero to keep the size of the pivots under control. For a bigger matrix the best bet is likely an iterative method, like successive over-relaxation.
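To make this concrete, here is a minimal sketch (assuming SciPy is available) that extracts the $K$ eigenvectors as an orthonormal basis of the null space of $A - I$, without calling an eigensolver. The test-matrix construction (an oblique projector $B(CB)^{-1}C$) is my own illustration, not something from the question:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
N, K = 200, 5

# Build a non-symmetric idempotent test matrix of rank K:
# A = B (C B)^{-1} C is an oblique projector, so A @ A == A.
B = rng.standard_normal((N, K))
C = rng.standard_normal((K, N))
A = B @ np.linalg.solve(C @ B, C)

# The eigenvectors with eigenvalue 1 are exactly the null space of A - I;
# null_space returns an orthonormal N x K basis (internally via one SVD).
V = null_space(A - np.eye(N))

print(V.shape)  # (200, 5): one basis vector per unit eigenvalue
```

Note that `null_space` pays for one full $N \times N$ SVD, so this only makes sense for moderate $N$; for truly large matrices an iterative solver applied to $(A - I)x = 0$ is the better bet.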