Improving the performance of eigs for a large SPD problem


I have two large (around $100{,}000 \times 100{,}000$), sparse, real, symmetric positive definite matrices $A$ and $B$, and I want to find the smallest generalized eigenvalue $$Ax = \lambda_{\min} Bx.$$ In MATLAB this can be done by calling `[V,D] = eigs(A,B,10,'sm',struct('issym',1,'isreal',1))`, but convergence is too slow: many large linear systems have to be solved before an acceptable accuracy is reached. Waiting for this is no fun at all, so I'd like to find a better way to solve the same problem.
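For reference, here is a small self-contained sketch of the call above on a synthetic SPD pair (the matrices below are stand-ins, not my actual Galerkin matrices; newer MATLAB versions spell the sigma option `'smallestabs'` instead of `'sm'`):

```matlab
% Synthetic stand-in for the real problem: a sparse SPD stiffness-like
% matrix A and a sparse SPD mass-like matrix B.
A = gallery('poisson', 30);                  % sparse SPD, 900 x 900
m = size(A, 1);
B = speye(m) + sprandsym(m, 0.01, 0.1, 1);   % sparse SPD

% Ten smallest generalized eigenvalues of A*x = lambda*B*x.
opts = struct('issym', 1, 'isreal', 1);
[V, D] = eigs(A, B, 10, 'sm', opts);
lambda_min = min(diag(D));
```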

Do you have any suggestions? In case you need more context: the matrices stem from a Galerkin method for a PDE. The structure of the basis guarantees that $B_{ij} \ge 0$ and $\sum_{i,j} B_{ij} = 1$, if that is useful.

Accepted answer:

In the end, uranix's hint about preconditioning led me to a solution. The key performance problem is solving many systems of the form $Ax = b$ with our $A$. Fortunately, $A$ has so many nice properties that the PCG algorithm works well with an incomplete Cholesky factorization (`ichol`) as a preconditioner.
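To see the preconditioner's effect in isolation before wiring it into eigs, one can compare a single PCG solve with and without it (a minimal sketch on a stand-in SPD matrix, not my actual system):

```matlab
% Solve one system A*x = b with PCG, with and without the
% incomplete-Cholesky preconditioner.
A = gallery('poisson', 100);          % stand-in sparse SPD matrix
b = ones(size(A, 1), 1);

L = ichol(A);                         % zero-fill incomplete Cholesky, A ~ L*L'
[x0, ~, ~, it0] = pcg(A, b, 1e-12, 500);          % no preconditioner
[x1, ~, ~, it1] = pcg(A, b, 1e-12, 500, L, L');   % preconditioned
fprintf('iterations: %d (plain) vs %d (ichol)\n', it0, it1);
```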

Thus, using eigs' ability to take the map $x \mapsto A^{-1}x$ as a function handle leads to this solution:

L = ichol(A);    % incomplete Cholesky factorization of A, preconditioner for pcg
n = size(A, 1);  % eigs needs the size of A when A is given as a function handle
Ainv = @(x) pcg(A, x, 1e-12, 200, L, L');  % applies x -> A^{-1}*x (approximately)
[V, D] = eigs(Ainv, n, B, 10, 'sm', struct('issym', 1, 'isreal', 1));

In my case, eigs needed $55$ iterations, and each inner pcg solve needed around $180$ iterations to converge.
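If the inner iteration count is still too high, a threshold-based incomplete Cholesky (the `'ict'` option that `ichol` supports) yields a denser but stronger factor, trading memory for fewer pcg iterations per solve. The `droptol` value below is an assumed starting point to tune, not a recommendation:

```matlab
% Threshold-based incomplete Cholesky: a smaller droptol gives a denser,
% stronger preconditioner and hence fewer pcg iterations, at the cost of
% more memory. droptol = 1e-3 is a guess; tune it for your matrix.
A = gallery('poisson', 100);  % stand-in SPD matrix
b = ones(size(A, 1), 1);
L = ichol(A, struct('type', 'ict', 'droptol', 1e-3));
[x, flag, relres, iters] = pcg(A, b, 1e-12, 200, L, L');
```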