What does the value of eigenvectors of a graph Laplacian matrix mean?


I know that the eigenvectors of the Laplacian matrix of a graph are important: they capture locality on the graph (as far as I know). But everything I've read about the eigenvectors of a graph Laplacian concerns the smoothness of an eigenvector and the relative values at neighboring nodes (how close the values of two neighboring nodes are within an eigenvector). Is there any intuition behind the exact values in an eigenvector?

For example, if $u_i$ is an eigenvector and $u_{ij}$ denotes the $j$-th entry of $u_i$, and $u_{ij}$ is the maximum entry of $u_{i}$, can we say anything about node $j$? Do higher values tell us anything relative to lower values? Does the sign of an entry mean anything?

I want to get an intuition about the values of an eigenvector.


BEST ANSWER

For intuition, we want to formulate eigenvector-finding as an optimization problem.

Let $A$ be any symmetric matrix. If we minimize $\frac{\mathbf x^{\mathsf T}A \mathbf x}{\mathbf x^{\mathsf T} \mathbf x}$ over all nonzero $\mathbf x$ (or, equivalently, minimize $\mathbf x^{\mathsf T}A \mathbf x$ over all $\mathbf x$ with $\|\mathbf x\|=1$), then we get the smallest eigenvalue back, with $\mathbf x$ being its eigenvector.

This is true for the Laplacian matrix $L$, except this minimum won't be very interesting: the smallest eigenvalue is always $0$, and $(1,1,\dots,1)$ is always an eigenvector. We can look at the second-smallest eigenvalue instead, by adding an extra constraint on $\mathbf x$: we can ask that $x_1 + x_2 + \dots + x_n = 0$, so that it's perpendicular to the eigenvector of the smallest eigenvalue.

So now we have a description of the Fiedler vector of the graph: it is the vector $\mathbf x$ that minimizes $\mathbf x^{\mathsf T}L\mathbf x$ subject to $\|\mathbf x\|=1$ and $x_1 + x_2 + \dots + x_n = 0$. To make this more helpful, note that $\mathbf x^{\mathsf T}L\mathbf x$ can be written as $\sum_{ij \in E} (x_i - x_j)^2$.
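As a sanity check, here is a small numerical sketch (a hypothetical 4-vertex path graph, using NumPy) of the quadratic-form identity $\mathbf x^{\mathsf T}L\mathbf x = \sum_{ij \in E}(x_i - x_j)^2$, and of extracting the Fiedler vector from an eigendecomposition:

```python
import numpy as np

# Hypothetical example: a path graph on 4 vertices, 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

# Build the Laplacian L = D - A directly from the edge list.
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1
    L[j, j] += 1
    L[i, j] -= 1
    L[j, i] -= 1

# Quadratic-form identity: x^T L x = sum over edges of (x_i - x_j)^2.
x = np.array([1.0, -2.0, 0.5, 3.0])
quad = x @ L @ x
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges)
assert np.isclose(quad, edge_sum)

# eigh returns eigenvalues in ascending order; the eigenvector for the
# second-smallest eigenvalue is the Fiedler vector.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
print(vals[0])    # smallest eigenvalue, numerically ~0
print(fiedler)    # entries sum to ~0, unit norm
```

The two constraints from the optimization problem show up automatically: the Fiedler vector returned by `eigh` has unit norm, and it sums to zero because it is orthogonal to the all-ones eigenvector.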

The conditions $\|\mathbf x\|=1$ and $x_1 + x_2 + \dots + x_n = 0$ tell us that the Fiedler vector has to have "enough" positive and negative components; they can't all be the same. Since we're minimizing $\sum_{ij \in E} (x_i - x_j)^2$, we want to make the components on any two adjacent vertices as close together as possible.

So the Fiedler vector ends up painting the graph in a gradient that goes from positive to negative. Each individual value $x_i$ doesn't mean much by itself. But the relative values do: clusters of vertices that are close together get similar values, and far-apart vertices often get different values.
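The "gradient" can be seen concretely on a path graph, where (up to a global sign flip) the Fiedler vector's entries change monotonically from one end of the path to the other. A minimal sketch, with the path graph as a hypothetical example:

```python
import numpy as np

# Hypothetical example: a path graph 0-1-...-7.
n = 8
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1
    L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1
    L[i + 1, i] -= 1

vals, vecs = np.linalg.eigh(L)
f = vecs[:, 1]                      # the Fiedler vector

# Its entries form a monotone "gradient" along the path
# (monotone increasing or decreasing, depending on the sign convention).
diffs = np.diff(f)
assert np.all(diffs > 0) or np.all(diffs < 0)
print(np.round(f, 3))
```

Each printed value means little on its own, but reading them in order shows exactly the positive-to-negative gradient described above.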

For the next eigenvector, we will add an additional constraint to our problem: we'll be looking for a vector $\mathbf y$ perpendicular to the Fiedler vector. Essentially, this says that $\mathbf y$ should have similar properties, but be different from the thing we found just now, describing a different feature of the graph.

For example, if our graph has three big and sparsely connected clusters, the Fiedler vector might assign positive values to one cluster and negative values to the other two. The next eigenvector might choose a different cluster to separate from the other two. Together these already distinguish all three clusters, so the eigenvector after that will have to find some intra-cluster separation...
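A numerical sketch of the three-cluster picture, using a hypothetical graph made of three 4-vertex cliques joined in a chain by single "bridge" edges:

```python
import numpy as np

# Hypothetical example: three cliques of 4 vertices each, chained by
# single bridge edges, giving three sparsely connected clusters.
n = 12
clusters = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
A = np.zeros((n, n))
for c in clusters:
    for i in c:
        for j in c:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1    # bridge: cluster 0 -- cluster 1
A[7, 8] = A[8, 7] = 1    # bridge: cluster 1 -- cluster 2

L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)
v1, v2 = vecs[:, 1], vecs[:, 2]   # Fiedler vector and the next eigenvector

m1 = [v1[c].mean() for c in clusters]
m2 = [v2[c].mean() for c in clusters]
# The Fiedler vector gives the two end clusters opposite signs...
assert m1[0] * m1[2] < 0
# ...while the next eigenvector separates the middle cluster from the ends.
assert m2[1] * m2[0] < 0 and m2[1] * m2[2] < 0
print(np.sign(m1), np.sign(m2))
```

On this graph the two eigenvectors are roughly constant on each cluster, and their joint sign patterns are enough to tell all three clusters apart.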


As a complementary answer, one can think about random walks on $G$. The ordinary random walk on $G$, where at each time step the process makes a transition uniformly at random, is actually not so intimately connected with the ordinary Laplacian matrix $L$ (as was discussed in the comments on Misha Lavrov's answer). It is instead connected with the symmetric normalized Laplacian matrix $L_{sym}$. The matrix $L_{sym}$ is related to the transition probability matrix of the random walk $P$ by the identity $L_{sym}=I-D^{1/2} P D^{-1/2}$. The small nonzero eigenvalues of $L_{sym}$ correspond to near-$1$ eigenvalues of $P$. Because $p(t)=p(0) P^t$, the left eigenvectors of $P$ with eigenvalues near $1$ correspond to "slow probability currents" between the vertices where the eigenvector is positive and the vertices where the eigenvector is negative, with relatively more probability lost or gained at a vertex where the absolute value of the eigenvector is larger.
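A small numerical sketch of these relations (the graph, a triangle with a pendant vertex, is a hypothetical example; $P = D^{-1}A$ is the walk's transition matrix):

```python
import numpy as np

# Hypothetical example: a triangle 0-1-2 with a pendant vertex 3 attached to 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
D_half = np.diag(np.sqrt(deg))
D_half_inv = np.diag(1 / np.sqrt(deg))

P = A / deg[:, None]                           # random-walk matrix P = D^-1 A
L_sym = np.eye(4) - D_half_inv @ A @ D_half_inv

# The identity L_sym = I - D^{1/2} P D^{-1/2}:
assert np.allclose(L_sym, np.eye(4) - D_half @ P @ D_half_inv)

# Eigenvalue correspondence: eigenvalues of P are exactly 1 - (eigenvalues
# of L_sym), since P and D^{-1/2} A D^{-1/2} are similar matrices.
mu = np.sort(np.linalg.eigvals(P).real)
lam = np.sort(np.linalg.eigvalsh(L_sym))
assert np.allclose(np.sort(1 - lam), mu)
```

So a small nonzero eigenvalue of $L_{sym}$ corresponds to an eigenvalue of $P$ just below $1$, i.e. a slowly mixing mode of the walk.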

Just about everything I said above does in fact apply to the Laplacian, once you change the setting. Specifically, the Laplacian is really connected to a continuous-time Markov chain on $G$ where each edge is a possible transition, with all edges having unit rate. Thus in this situation the typical holding time at a high-degree vertex is shorter than at a low-degree vertex (unlike for the ordinary random walk). The generator matrix, say $Q$, of this CTMC is the negative of the Laplacian (its diagonal has negative entries), and the probability distribution evolves as $p'=pQ$, so $p(t)=p(0) e^{Qt}=p(0) e^{-Lt}$. So now in this setting the left eigenvectors of $L$ with eigenvalues close to zero (equivalently, the left eigenvectors of $e^{-Lt}$ with eigenvalues close to $1$) generate the slow probability currents.
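The evolution $p(t)=p(0)e^{-Lt}$ can be sketched numerically; since $L$ is symmetric, $e^{-Lt} = V\,\mathrm{diag}(e^{-\lambda t})\,V^{\mathsf T}$ from its eigendecomposition. A hypothetical 4-vertex path graph:

```python
import numpy as np

# Hypothetical example: the CTMC above on a path graph 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1
    L[j, j] += 1
    L[i, j] -= 1
    L[j, i] -= 1

# L is symmetric, so e^{-Lt} = V diag(e^{-lambda t}) V^T.
lam, V = np.linalg.eigh(L)
def heat_kernel(t):
    return V @ np.diag(np.exp(-lam * t)) @ V.T

p0 = np.array([1.0, 0.0, 0.0, 0.0])   # all probability starts at vertex 0
for t in (0.1, 1.0, 10.0):
    p = p0 @ heat_kernel(t)
    assert np.isclose(p.sum(), 1.0)   # probability is conserved: L 1 = 0
print(p0 @ heat_kernel(50.0))         # approaches the uniform distribution
```

Since every nonzero eigenvalue of $L$ decays as $e^{-\lambda t}$, only the eigenvalues closest to zero survive at large $t$: exactly the "slow probability currents" described above, with the all-ones eigenvector giving the uniform limit.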