I have two large (around $100{,}000 \times 100{,}000$), sparse, real, symmetric positive definite matrices $A$ and $B$, and I want to find the smallest generalized eigenvalue
$$Ax = \lambda_{\min} Bx$$
This can be done in MATLAB by calling `[V,D] = eigs(A,B,10,'sm',struct('issym',1,'isreal',1))`, but convergence is too slow, so many large linear systems have to be solved to reach acceptable accuracy. That is no fun at all to wait for, so I'd like to find a better way to solve the same problem.
Do you have any suggestions? In case you need more context: the matrices stem from a Galerkin discretization of a PDE. The structure of the basis guarantees that $B_{ij} \ge 0$ and $\sum_{i,j} B_{ij} = 1$, if that is useful.
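For reference, here is roughly what I'm doing, sketched in Python/SciPy rather than MATLAB (the matrices below are small stand-ins for my actual $A$ and $B$; `eigsh` with `sigma=0` uses shift-invert to target the generalized eigenvalues nearest zero, which for SPD matrices are the smallest ones):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Small stand-in matrices; the real A and B are ~100,000 x 100,000 and sparse.
n = 200
# 1D Laplacian stiffness matrix: sparse, symmetric positive definite.
A = sp.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format='csc')
# Scaled identity as a lumped "mass" matrix: SPD, entries >= 0 summing to 1.
B = sp.diags(np.full(n, 1.0 / n), 0, format='csc')

# Shift-invert about sigma=0: eigsh factorizes A internally and returns the
# generalized eigenvalues of A x = lambda B x closest to 0, i.e. the smallest.
vals, vecs = eigsh(A, k=1, M=B, sigma=0, which='LM')
lam_min = vals[0]
print(lam_min)
```

This works, but like the MATLAB call it hinges on repeatedly solving systems with $A$, which is where all the time goes at my problem size.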
Finally, uranix's hint about preconditioning led me to a solution. The key performance problem is solving many systems of the form $Ax = b$ with our $A$. Fortunately, $A$ has so many nice properties that the PCG algorithm works well with `ichol` as a preconditioner. Using `eigs`' capability to take $x \mapsto A^{-1}x$ as a function handle then solves the problem: in my case `eigs` needed $55$ calls to `pcg`, and each `pcg` solve took around $180$ iterations to converge.
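A sketch of the same idea in Python/SciPy, in case it helps someone without MATLAB (this is not my actual code: SciPy has no incomplete Cholesky, so I substitute `spilu`, an incomplete LU factorization, as the preconditioner, and the matrices are small stand-ins for the real $A$ and $B$). A hand-written PCG implements $x \mapsto A^{-1}x$, and that operator is passed to `eigsh` via `OPinv` for shift-invert about $\sigma = 0$:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh, spilu, LinearOperator

# Small SPD stand-ins for the real (~100,000 x 100,000) matrices.
n = 200
A = sp.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format='csc')
B = sp.diags(np.full(n, 1.0 / n), 0, format='csc')

# Incomplete LU factorization as the preconditioner
# (stand-in for MATLAB's ichol, which SciPy lacks).
ilu = spilu(A)

def pcg(A, b, precond, tol=1e-12, maxiter=1000):
    """Preconditioned conjugate gradients: solve A x = b for SPD A."""
    x = np.zeros_like(b)
    r = b.copy()              # residual b - A x for x = 0
    z = precond(r)            # preconditioned residual
    p = z.copy()              # search direction
    rz = r @ z
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * bnorm:
            break
        z = precond(r)
        rz_next = r @ z
        p = z + (rz_next / rz) * p
        rz = rz_next
    return x

# Hand x -> A^{-1} x to eigsh as the shift-invert operator (sigma = 0),
# so each ARPACK iteration triggers one PCG solve with A.
Ainv = LinearOperator((n, n), matvec=lambda b: pcg(A, np.ravel(b), ilu.solve))
vals, _ = eigsh(A, k=1, M=B, sigma=0, which='LM', OPinv=Ainv)
lam_min = vals[0]
print(lam_min)
```

The same structure carries over to MATLAB: build the `ichol` factor, wrap `pcg` in a function handle, and pass that handle to `eigs` in place of the matrix.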