Most efficient to solve $(I + L)x = b$ where $L$ is graph Laplacian


Assume $G$ is a graph with $n$ vertices, and $L = D - W$ is its graph Laplacian matrix. I want to solve the linear equation $(I + L)x = b$, where $I$ is the identity matrix, $b$ is a constant vector, and $x$ is the vector of unknowns. How can I solve this equation for $x$ efficiently?

Due to the special properties of $I + L$, which is symmetric and positive definite, I can use a Cholesky factorization to solve the system. But I think the complexity of that approach may be too high. Since $I + L$ is a large sparse matrix, I suspect there may exist a more efficient solver for this problem. Can anyone help me out?
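As a baseline for comparison, here is a minimal sketch of assembling $I + L$ and solving it directly with SciPy's sparse solver; the 4-vertex weighted path graph is a hypothetical toy example, not from the question.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Hypothetical toy graph: a weighted 4-vertex path (any sparse W works).
W = sp.csr_matrix(np.array([
    [0., 1., 0., 0.],
    [1., 0., 2., 0.],
    [0., 2., 0., 1.],
    [0., 0., 1., 0.],
]))
deg = np.asarray(W.sum(axis=1)).ravel()   # vertex degrees
L = sp.diags(deg) - W                     # graph Laplacian L = D - W
A = (sp.identity(4) + L).tocsc()          # I + L, symmetric positive definite

b = np.ones(4)
x = spsolve(A, b)                         # sparse direct solve
print(np.allclose(A @ x, b))
```

For large graphs the fill-in of such direct factorizations is what makes the iterative approaches below attractive.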


There are 2 best solutions below


This kind of equation is called a symmetric diagonally dominant (SDD) linear system. Research into using graph-based algorithms to solve SDD systems is still fairly new but seems promising, with approximation algorithms that run in nearly linear time. However, these algorithms are still mostly theoretical and do not yet offer a performance improvement for practical applications. If you do want to try some of these solvers, Daniel Spielman maintains a Julia package, Laplacians.jl, with a few implementations.

I don't recommend trying to implement these algorithms yourself: the literature on SDD systems is particularly dense (the first such publication was 130 pages long) and requires a good understanding of numerical linear algebra, including iterative solvers, preconditioning, and conjugate gradient methods. For practical purposes, you should use existing implementations of iterative solvers, which are typically faster and more stable than matrix factorizations and other direct methods for sufficiently large systems.


For large sparse symmetric systems, the best methods are usually iterative, and a preconditioned conjugate gradient solver will be very efficient. However, given the diagonal perturbation $I$ in your equation, even a plain gradient descent (Richardson) iteration $$x_{k+1} = x_k - \alpha\big((I + L)x_k - b\big)$$ should converge rather quickly. The step-size restriction is $\alpha \leq 1/(1+d)$, where $d$ is the maximum degree of the graph (this guarantees $\alpha < 2/\lambda_{\max}(I+L)$, since $\lambda_{\max}(L) \leq 2d$).
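The iteration above can be sketched in a few lines; the 4-cycle graph and the iteration count are assumptions for illustration, not part of the answer.

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical toy graph: an unweighted 4-cycle, so the max degree d is 2.
rows = [0, 1, 1, 2, 2, 3, 3, 0]
cols = [1, 0, 2, 1, 3, 2, 0, 3]
W = sp.csr_matrix((np.ones(8), (rows, cols)), shape=(4, 4))
deg = np.asarray(W.sum(axis=1)).ravel()
L = sp.diags(deg) - W
A = sp.identity(4) + L                 # I + L

b = np.ones(4)
alpha = 1.0 / (1.0 + deg.max())        # step size alpha <= 1/(1+d)
x = np.zeros(4)
for _ in range(200):
    x = x - alpha * (A @ x - b)        # x_{k+1} = x_k - alpha((I+L)x_k - b)
print(np.allclose(A @ x, b))
```

Each step costs one sparse matrix-vector product, i.e. $O(|E| + n)$ work.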

For preconditioned conjugate gradient, I would suggest solving the symmetrically scaled system $$D^{-1/2}(I + L)D^{-1/2}\,y = D^{-1/2}b$$ with the conjugate gradient method and then setting $x = D^{-1/2}y$, where $D$ is the diagonal degree matrix. Note that $D^{-1/2}(I + L)D^{-1/2} = D^{-1} + D^{-1/2}LD^{-1/2}$ remains symmetric positive definite. This is diagonal (Jacobi) preconditioning in symmetric form.

There are many implementations of the conjugate gradient method; in Python, for instance, see scipy.sparse.linalg.cg in the SciPy package.
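Putting the two previous paragraphs together, here is a minimal sketch of the diagonally scaled CG solve with SciPy; the toy weighted path graph is an assumption for illustration, and the scaling requires every vertex degree to be positive (isolated vertices would need $D$ replaced by, e.g., $D + I$).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Hypothetical toy graph: a weighted 4-vertex path (all degrees positive).
W = sp.csr_matrix(np.array([
    [0., 1., 0., 0.],
    [1., 0., 2., 0.],
    [0., 2., 0., 1.],
    [0., 0., 1., 0.],
]))
deg = np.asarray(W.sum(axis=1)).ravel()
L = sp.diags(deg) - W
A = sp.identity(4) + L                 # I + L
b = np.ones(4)

# Symmetric diagonal scaling: solve D^{-1/2}(I+L)D^{-1/2} y = D^{-1/2} b,
# then recover x = D^{-1/2} y.
Dinv_sqrt = sp.diags(1.0 / np.sqrt(deg))
y, info = cg(Dinv_sqrt @ A @ Dinv_sqrt, Dinv_sqrt @ b)
x = Dinv_sqrt @ y
print(info == 0, np.allclose(A @ x, b, atol=1e-3))
```

Alternatively, scipy.sparse.linalg.cg accepts a preconditioner via its M keyword, which achieves the same effect without forming the scaled matrix explicitly.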