Meaning of the inverse of the Laplacian matrix


Given an undirected graph $G = (V,E)$, let $\bf A$ and $\mathcal L_{\bf A}$ denote its adjacency matrix and its Laplacian matrix, respectively. $\mathcal L_{\bf A}(i,i)$ is the degree of vertex ${\bf v}_i$, and $\mathcal L_{\bf A}(i,j) = -1$ if vertices ${\bf v}_i$ and ${\bf v}_j$ are adjacent (and $0$ otherwise).

I know the Laplacian matrix is exactly the discrete Laplace operator. Given a function $\phi : V \to \Bbb R$, and supposing that $\bf A$ is indexed by $\bf v$, i.e., ${\bf A}(i,j) = 1$ iff vertices ${\bf v}_i$ and ${\bf v}_j$ are adjacent, then

$$\Delta \phi = \mathcal L_{\bf A}\,\phi({\bf v}),$$ where $\phi({\bf v})$ denotes the vector $\bigl(\phi({\bf v}_1),\dots,\phi({\bf v}_n)\bigr)^\top$.
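For concreteness, here is a minimal numerical sketch of this (assuming Python with numpy; the graph and the function $\phi$ below are made-up examples, not taken from the question): build $\mathcal L_{\bf A} = D - {\bf A}$ from the adjacency matrix and apply it to a vector of vertex values.

```python
import numpy as np

# Adjacency matrix of a small path graph 0 - 1 - 2 - 3 (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))            # degree matrix
L = D - A                             # Laplacian matrix  L_A = D - A

phi = np.array([1.0, 2.0, 4.0, 8.0])  # an arbitrary function phi on the vertices

# (L @ phi)[i] = deg(v_i) * phi(v_i) - sum of phi over the neighbours of v_i,
# which is the discrete Laplace operator applied to phi.
print(L @ phi)
```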

What I need to know is the meaning of the inverse (or pseudoinverse if it is not invertible) of the Laplacian. Is there any meaningful interpretation of the inverse Laplacian? Is there any concrete meaning for $\mathcal L^{-1}_{\bf A}(i,j)$?


3 Answers

Best answer

One answer is in terms of the commute-time distance. Let $C_{ij}$ denote the expected length of a random walk on the graph that starts at node $i$, reaches node $j$, and then returns to node $i$ (the commute time between $i$ and $j$). Let $P$ denote the orthogonal projection onto the subspace orthogonal to the constant vector. Let $L$ denote the graph Laplacian.

Then $L^\dagger \propto PCP$; explicitly, $L^\dagger = -\tfrac{1}{2\,\mathrm{vol}(G)}\, P C P$, where $\mathrm{vol}(G)$ is the sum over all node degrees.

Because $L^\dagger$ is positive semi-definite and $L^\dagger \mathbf 1 = \mathbf 0$, this makes $C$ a Euclidean distance matrix (its entries are squared distances between points in a Euclidean embedding of the nodes), and we may compute the entries using $$ C_{ij} \propto (L^\dagger)_{ii} + (L^\dagger)_{jj} - 2 (L^\dagger)_{ij} ~~.$$ (The proportionality constant is $\mathrm{vol}(G)$, the sum over all node degrees.)
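This is easy to check numerically. Below is a hedged sketch (assuming Python with numpy and a small made-up graph, none of which appears in the answer itself): compute the pseudoinverse of the Laplacian and assemble the commute-time matrix from its entries.

```python
import numpy as np

# Small connected example graph (hypothetical).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A

Ldag = np.linalg.pinv(L)            # Moore-Penrose pseudoinverse L^+
vol = deg.sum()                     # sum over all node degrees

# C[i, j] = vol * (L^+_{ii} + L^+_{jj} - 2 L^+_{ij}): the commute times.
d = np.diag(Ldag)
C = vol * (d[:, None] + d[None, :] - 2 * Ldag)
print(np.round(C, 3))
```

Classical multidimensional scaling applied to $C$ then yields the Euclidean coordinates referred to in the next paragraph.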

The Euclidean coordinates obtained from $C$ provide one "spectral" embedding of the graph, similar to, but distinct from, the usual one built from eigenvectors of the Laplacian. Von Luxburg discusses this in her tutorial on spectral clustering.

Another answer

I gained a lot of this intuition by reading "Online Learning over Graphs" by Herbster et al. $L^+$ can be thought of as the reproducing kernel of the Hilbert space $H$ of real-valued functions on the vertices of the graph, $$f: V \to \Bbb R,$$ equipped with the inner product $$\langle f,g \rangle = f^\top L g.$$ Since $f$ lives in a Hilbert space, we can use the Riesz representation theorem to conclude that evaluation at any vertex, $f \mapsto f(v_i)$, being a linear functional, can be represented as an inner product: $$f(v_i)=\langle K_i,f\rangle = K_i^\top L f.$$ The pseudoinverse $L^+$ provides such representers $K_i$ (the $i$-th column of $L^+$), which can be verified by noting that $$f_i = L^+_i L f,$$ where $L^+_i$ denotes the $i$-th row of $L^+$ (equal to the $i$-th column, since $L^+$ is symmetric).
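As a small sanity check of this reproducing property, here is a sketch assuming Python with numpy and a made-up connected graph; $f$ is chosen orthogonal to the constant vector, in line with the edit at the end of this answer.

```python
import numpy as np

# A connected example graph (hypothetical).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
Lplus = np.linalg.pinv(L)           # L^+

f = np.array([3.0, -1.0, 2.0, 1.0])
f = f - f.mean()                    # project out the zero-eigenvalue (constant) direction

for i in range(len(f)):
    K_i = Lplus[:, i]               # representer of evaluation at vertex i
    # <K_i, f>_L = K_i^T L f should equal f[i]
    print(K_i @ L @ f, f[i])
```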

Online learning over graphs: the pseudoinverse of the graph Laplacian is a key object in online learning algorithms for graphs. As usual, we can get more intuition about the graph Laplacian and related objects by using the concept of conductance (it is no coincidence that the continuous version of the Laplacian appears in the heat equation). If we think of $f$ as some heat distribution over the nodes (I am using "distribution" loosely, so it need not be normalized), we can think of $Lf$ roughly as the flux induced at each node by that distribution over the graph. Taking this a step further, the representer identity $f(v_i) = L^+_i L f$ can be interpreted as saying that the heat at each vertex can be expressed in terms of, or derived from, the flux through every vertex.

So we can say a few things qualitatively. If $L$ sends a heat distribution $f$ over the nodes to a flux through each vertex, then $L^+$ sends a distribution of fluxes over the graph back to a heat distribution that would have induced it. Note that, intuitively, this distribution cannot be unique: adding an arbitrary constant to the distribution on each connected component does not change the flux it induces. Mathematically, this is reflected in the fact that the nullspace of $L$ is spanned by the indicator vectors of the connected components, so $L$ is never invertible (the constant vector always lies in its nullspace) and the multiplicity of the zero eigenvalue equals the number of connected components; $L^+$ therefore recovers the heat distribution only up to one additive constant per component.
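A short sketch of this heat/flux picture (assuming Python with numpy; the disconnected example graph is made up): send a heat distribution through $L$ to get a flux, map the flux back through $L^+$, and observe that the result differs from the original by a constant on each connected component.

```python
import numpy as np

# Disconnected example graph: a triangle {0, 1, 2} plus an edge {3, 4}.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
Lplus = np.linalg.pinv(L)

f = np.array([5.0, 1.0, 3.0, 10.0, 4.0])   # a "heat distribution"
flux = L @ f                                # flux induced at each vertex
g = Lplus @ flux                            # a heat distribution with the same flux

print(np.allclose(L @ g, flux))   # True: g induces exactly the same flux
print(f - g)                      # constant on each component, but not globally
```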

Going back to the online learning application, we can use the $i$-th row of $L^+$ as follows. First we have guesses for all nodes. Then we update our guesses given knowledge of certain values (using a projection, for example), and we wish to know how this update affects our guess at vertex $i$. We do this by translating the updated "heat distribution" into an updated flux through all of the nodes by calculating $Lf$. Then, using this updated flux, $L^+_i L f = \langle L^+_i, f\rangle$ tells us the $i$-th component of the distribution that would have produced the new flux, i.e., an updated guess for the value at vertex $i$.

EDIT:

I also glossed over the fact that the "inner product" is actually not a true inner product as stated, since if $f^*$ is an eigenvector of $L$ corresponding to the eigenvalue $0$, then $\langle f^*, f^*\rangle = 0$. Therefore we must restrict our functions to the subspace of $H$ spanned by the eigenvectors with non-zero eigenvalues. In terms of the heat analogy, this means we can only look at heat distributions that induce some flux over the graph (plus the zero vector, of course) in order to have a genuine inner product space.

Another answer

The answer by Eugene Scvarts can be extended using the paper "The Principal Components Analysis of a Graph, and its Relationships to Spectral Clustering" by Marco Saerens, Francois Fouss, Luh Yen and Pierre Dupont.

This paper shows that performing principal component analysis with the pseudoinverse of the Laplacian playing the role of the covariance matrix yields an embedding of the nodes that maximally preserves the commute-time distance. In this sense the pseudoinverse Laplacian acts as a covariance matrix for the graph.
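A hedged sketch of this PCA view (assuming Python with numpy and a small made-up graph, not anything taken from the paper): eigendecompose $L^+$ as if it were a covariance matrix, place node $i$ at the $i$-th row of $U\Lambda^{1/2}$, and check that, with all components kept, the squared embedding distances scaled by the sum of degrees reproduce the commute times; truncating to the leading components then preserves commute distance as well as possible.

```python
import numpy as np

# Small connected example graph (hypothetical).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A
Lplus = np.linalg.pinv(L)

lam, U = np.linalg.eigh(Lplus)              # eigen-decomposition of the "covariance" L^+
X = U * np.sqrt(np.clip(lam, 0, None))      # node i sits at row i of U * sqrt(lambda)

# Squared distances in this embedding, scaled by vol(G), give the commute times.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
d = np.diag(Lplus)
C = deg.sum() * (d[:, None] + d[None, :] - 2 * Lplus)
print(np.allclose(deg.sum() * sq, C))       # True
```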

Note that this interpretation also gives an alternative way to understand spectral clustering as explained in A Tutorial on Spectral Clustering by Ulrike von Luxburg.