Question about graph laplacian quadratic form


While I was reading "The Emerging Field of Signal Processing on Graphs," I had a question about the graph Laplacian's quadratic form. The publication claims that $$ S_2(\mathbf f) = \frac12 \sum_{i \in V} \sum_{j \in \mathcal N_i} W_{i,j} [\mathbf f(j) - \mathbf f(i)]^2 = \mathbf f^{\mathsf T} \mathcal L \mathbf f. $$

I am wondering how the left-hand side of the equation (the sigma form) simplifies to $\mathbf f^{\mathsf T} \mathcal L \mathbf f$ (the matrix form). How can I recognize similar patterns (sigma form to matrix form) in the future? Thank you.


There are 2 best solutions below


I'm going to restate the definitions from the publication to give your question some context. For the remainder of this answer, $G = (V, E, w)$ denotes a weighted graph and $\mathbf f$ a real-valued function on $V$ (referred to as a "signal" in the publication).

The edge derivative of a signal $\mathbf f$ with respect to an edge $e = i \sim j$ at vertex $i$ is defined as $$ \left. \frac{\partial \mathbf f}{\partial e} \right|_i = (\mathbf f(j) - \mathbf f(i)) \sqrt{w(i, j)} $$

The graph gradient of $\mathbf f$ at $i \in V$ is the vector $\nabla_i \mathbf f$ with $e$-th entry $$ [\nabla_i \mathbf f](e) = \left. \frac{\partial \mathbf f}{\partial e} \right|_i $$

The local variation of $\mathbf f$ at vertex $i$ is $$\begin{align*} \left\lVert \nabla_i \mathbf f \right\rVert_2 &= \left( \sum_{e \in E} \left( \left. \frac{\partial \mathbf f}{\partial e} \right|_i \right)^2 \right)^{1/2} \\ &= \left( \sum_{j \sim i} w(i, j) (\mathbf f(j) - \mathbf f(i))^2 \right)^{1/2}. \end{align*}$$

The local variation is supposed to give some notion of the "smoothness" of a signal near $i$.

The discrete $p$-Dirichlet form is a notion of the global smoothness of a signal defined as $$\begin{align*} S_p(\mathbf f) &= \frac 1p \sum_{i \in V} \left\lVert \nabla_i \mathbf f \right\rVert^p_2 \\ &= \frac 1p \sum_{i \in V} \left( \sum_{j \sim i} w(i, j) (\mathbf f(j) - \mathbf f(i))^2 \right)^{p/2}. \end{align*}$$
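To make these definitions concrete, here is a small NumPy sketch (the graph weights and the signal are made-up illustrative values, not from the publication) computing the local variation at each vertex and the $p = 2$ Dirichlet form:

```python
import numpy as np

# A small weighted graph on 3 vertices: symmetric weight matrix W,
# with W[i, j] = w(i, j), and zero where there is no edge.
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
f = np.array([1.0, 3.0, 0.0])  # a signal on the vertices

# Local variation at vertex i: sqrt(sum_j w(i, j) * (f(j) - f(i))^2)
local_var = np.sqrt((W * (f[None, :] - f[:, None]) ** 2).sum(axis=1))

# Discrete p-Dirichlet form for p = 2
S2 = 0.5 * (local_var ** 2).sum()

print(local_var)  # local variation at vertices 0, 1, 2
print(S2)         # S_2(f) = 9.0 for these values
```

With these particular values, vertex 0 has local variation $\sqrt{2 \cdot 4 + 1 \cdot 1} = 3$, and $S_2(\mathbf f) = \frac12 (9 + 8 + 1) = 9$.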

I'm not very familiar with signal processing and I don't know what "sigma form" is, but the definition of the Laplacian matrix is actually based on the Dirichlet form. The Laplacian matrix is typically characterized as $L = D - A$, where $D$ is the degree matrix of a graph and $A$ the adjacency matrix, but its original definition was as a linear operator on the space of functions $V \to \mathbb F$. For a function $\mathbf f: V \to \mathbb F$, the Laplacian is defined by the rule $$ [L \mathbf f](u) = \sum_{v \sim u} w(u, v) (\mathbf f(u) - \mathbf f(v)). $$ The identity $$ \mathbf f^T L \mathbf f = \sum_{i \sim j} w(i, j) (\mathbf f(j) - \mathbf f(i))^2 $$ then follows readily. As for why the $1/2$ vanishes: in the double sum, $(i, j)$ and $(j, i)$ are counted as separate edges, so the $1/2$ corrects for counting each undirected edge $i \sim j$ twice; here the sum runs over each edge only once.
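A quick numerical check of this identity, using an arbitrary weighted graph and signal (the values below are my own illustrative choices, not from the publication):

```python
import numpy as np

# Symmetric weighted adjacency matrix A and a signal f (illustrative values)
A = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 4.0],
              [1.0, 4.0, 0.0]])
D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # graph Laplacian

f = np.array([1.0, -1.0, 2.0])

# Matrix form of the quadratic form
quad = f @ L @ f

# Sum over undirected edges i ~ j (each edge counted exactly once, no 1/2)
edge_sum = sum(A[i, j] * (f[j] - f[i]) ** 2
               for i in range(3) for j in range(i + 1, 3))

print(quad, edge_sum)  # both are 45.0 for these values
```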

The definition of $L$ as $D - A$ actually came after mathematicians had already defined the Laplacian as a product of boundary operators. Several texts explore these concepts further; I recommend the first chapter of Chung's Spectral Graph Theory or Section 1.2.1 of the more recent Higher-Order Systems.


Here is a short algebraic argument. Note that the weighted graph Laplacian is defined in the source as $\mathcal L = D - W$, where $W_{i,j}$ is the weight of the edge between $i$ and $j$ (or $0$ if there is no such edge) and $D$ is the diagonal matrix where $D_{i,i}$ is the sum of weights on edges with endpoint at $i$.

Then \begin{align} S_2(\mathbf f) &= \frac12 \sum_{i \in V} \sum_{j \in \mathcal N_i} W_{i,j} [ f(i) - f(j)]^2 \\ &= \frac12 \sum_{i \in V} \sum_{j \in V} W_{i,j} [ f(i) - f(j)]^2 \\ &= \frac12 \sum_{i \in V} \sum_{j \in V} W_{i,j} [f(i)^2 - 2 f(i)f(j) + f(j)^2] \\ &= \frac12 \sum_{i \in V} \sum_{j \in V} W_{i,j} f(i)^2 - \frac12 \sum_{i \in V} \sum_{j \in V} 2 W_{i,j} f(i)f(j) + \frac12 \sum_{i \in V} \sum_{j \in V} W_{i,j} f(j)^2 \\ &= \frac12 \sum_{i \in V} D_{i,i} f(i)^2 - \sum_{i \in V} \sum_{j \in V} W_{i,j} f(i)f(j) + \frac12 \sum_{j \in V} D_{j,j} f(j)^2 \\ &= \sum_{i \in V} D_{i,i} f(i)^2 - \sum_{i \in V} \sum_{j \in V} W_{i,j} f(i)f(j) \\ &= \mathbf f^{\mathsf T}\! D\mathbf f - \mathbf f^{\mathsf T}W\mathbf f \\ &= \mathbf f^{\mathsf T}\! \mathcal L \mathbf f. \end{align}
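As a numerical sanity check of this chain of equalities (the graph and signal below are arbitrary, not from the source), a random symmetric weight matrix stands in for an arbitrary weighted graph:

```python
import numpy as np

# Random symmetric weight matrix with zero diagonal (illustrative graph)
rng = np.random.default_rng(0)
n = 5
upper = np.triu(rng.random((n, n)), 1)
W = upper + upper.T            # symmetric weights, zero diagonal
D = np.diag(W.sum(axis=1))     # degree matrix from the row sums of W
f = rng.standard_normal(n)     # an arbitrary signal

# S_2 as the half double sum over all ordered pairs (i, j)
S2 = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
               for i in range(n) for j in range(n))

print(S2)
print(f @ D @ f - f @ W @ f)   # f^T D f - f^T W f
print(f @ (D - W) @ f)         # f^T L f — all three agree
```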

A few notable places where justification could be useful:

  • We can replace the sum over $j \in \mathcal N_i$ by a sum over $j \in V$ because $W_{i,j}$ will be $0$ in cases where $j \notin \mathcal N_i$, anyway.
  • The sum $\sum_{j \in V} W_{i,j}$ is precisely the diagonal entry $D_{i,i}$ as given in the definition of $\mathcal L$.
  • In general, the bilinear form $\mathbf x^{\mathsf T}\!A \mathbf y$ can be written as a sum $\sum_i \sum_j A_{ij} x_i y_j$. If $A$ is a diagonal matrix, most terms vanish and we are left with just $\sum_i A_{ii} x_i y_i$.
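A small NumPy check of this last point, using arbitrary illustrative values:

```python
import numpy as np

# Check x^T A y = sum_ij A[i, j] x[i] y[j] for an arbitrary small matrix
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, -2.0])
y = np.array([0.5, 3.0])

matrix_form = x @ A @ y
sum_form = sum(A[i, j] * x[i] * y[j] for i in range(2) for j in range(2))

# For a diagonal matrix, only the i == j terms survive
Dg = np.diag([5.0, -1.0])
diag_form = sum(Dg[i, i] * x[i] * y[i] for i in range(2))

print(matrix_form, sum_form)   # -20.5 -20.5
print(x @ Dg @ y, diag_form)   # 8.5 8.5
```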