On page 399 of Zaki & Meira$^\color{red}{\star}$, equation 16.7, I'm having a hard time understanding the symmetric graph Laplacian matrix given by
$$L^{sym}=\Delta^{-1/2}L\Delta^{-1/2}$$
Since $\Delta^{-1/2}$ is a diagonal matrix, it seems to me that $L^{sym}$ will be diagonal too. But that conflicts with this expanded definition:
$$ L^{sym}=\begin{bmatrix} \frac{1}{\delta_1}\sum\limits_{j = 1, j\ne 1}^n A_{1j} & -\frac{A_{12}}{\sqrt{\delta_1\delta_2}} & \ldots & -\frac{A_{1n}}{\sqrt{\delta_1\delta_n}} \\ -\frac{A_{21}}{\sqrt{\delta_2\delta_1}} & \frac{1}{\delta_2}\sum\limits_{j = 1, j\ne 2}^nA_{2j} & \ldots & -\frac{A_{2n}}{\sqrt{\delta_2\delta_n}} \\ \vdots & \vdots & \ddots & \vdots \\ -\frac{A_{n1}}{\sqrt{\delta_n\delta_1}} & -\frac{A_{n2}}{\sqrt{\delta_n\delta_2}} & \ldots & \frac{1}{\delta_n}\sum\limits_{j = 1, j\ne n}^nA_{nj} \end{bmatrix} $$
Even Python (using numpy as np) seems to disagree with this:
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 5],
              [3, 5, 6]])
# This Delta matrix is diag([6, 11, 14])
Delta = np.diag([sum(row) for row in A])
L = Delta - A
L_sym = (Delta ** -0.5) * L * (Delta ** -0.5)
This sets L_sym to a diagonal matrix. I'm pretty confused by this, so any help is much appreciated. Thanks in advance!
$\color{red}{\star}$ Mohammed J. Zaki, Wagner Meira, Jr, Data Mining and Machine Learning — Fundamental concepts and algorithms (2nd edition) [10 MB PDF], Cambridge University Press, March 2020. ISBN: 978-1108473989.
The two definitions that you give for the symmetric Laplacian $L^{sym}$ are equivalent. To show this, we'll use two facts. First, let's see what happens to a matrix $A$ when we multiply it by a diagonal matrix $D$. If we multiply $A$ by $D$ on the left (i.e. $DA$), then we scale the rows of $A$ by the corresponding diagonal entries. Similarly, if we multiply $A$ by $D$ on the right (i.e. $AD$), we scale the columns of $A$ by the corresponding diagonal entries. The second fact is that if $D$ is a diagonal matrix, then $D^{-1/2}$ is obtained simply by taking inverse square roots of the diagonal entries (this is not true for a general matrix $A$). Using the definition of the original Laplacian $L$ and these two facts, the equivalence between your two definitions should be clear.
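To make these two facts concrete, here is a quick numpy sketch (the matrix and the diagonal entries are arbitrary example values, not taken from the book):

```python
import numpy as np

# Arbitrary 3x3 matrix and diagonal matrix (example values only)
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
d = np.array([4., 9., 16.])
D = np.diag(d)

# Fact 1: D @ A scales the rows of A; A @ D scales its columns
assert np.allclose(D @ A, d[:, None] * A)   # row i scaled by d[i]
assert np.allclose(A @ D, A * d[None, :])   # column j scaled by d[j]

# Fact 2: for a diagonal matrix, D^(-1/2) is just the inverse
# square roots of the diagonal entries, and D^(-1/2) D D^(-1/2) = I
D_inv_sqrt = np.diag(1 / np.sqrt(d))
assert np.allclose(D_inv_sqrt @ D @ D_inv_sqrt, np.eye(3))
```

Here `@` is numpy's matrix-multiplication operator, equivalent to `np.dot` for 2-D arrays.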
I think there are two places you might have had confusion. The first is what happens when you multiply a matrix by a diagonal matrix -- the result is not necessarily a diagonal matrix, just a row or column scaling of your original matrix, as mentioned above. Second, your Python code is incorrect. You currently have

L_sym = (Delta ** -0.5) * L * (Delta ** -0.5)

when you should be writing

Delta_inv_sqrt = np.diag(np.diag(Delta) ** -0.5)
L_sym = np.dot(np.dot(Delta_inv_sqrt, L), Delta_inv_sqrt)

The reason is that * corresponds to element-wise multiplication, while np.dot corresponds to matrix multiplication. Be careful with the way you take $\Delta^{-1/2}$ as well: ** is also element-wise, so Delta ** -0.5 raises the off-diagonal zeros to the power $-1/2$ and fills them with inf. Taking inverse square roots of only the diagonal entries, as above, gives the correct $\Delta^{-1/2}$; by the second fact this works because $\Delta$ is diagonal, but it would not for a general matrix. Lastly, your adjacency matrix should have zeros on the diagonal if your graph is simple (i.e. has no self edges).
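To tie everything together, here is a sketch of the corrected computation. I've swapped in a hypothetical zero-diagonal adjacency matrix (your original A has nonzero diagonal entries, i.e. self edges), and the code checks that $\Delta^{-1/2}L\Delta^{-1/2}$ matches the entry-wise formula:

```python
import numpy as np

# Adjacency matrix of a simple graph: zeros on the diagonal
# (example values, not the A from the question)
A = np.array([[0., 2., 3.],
              [2., 0., 5.],
              [3., 5., 0.]])
deg = A.sum(axis=1)              # degrees delta_i = [5, 7, 8]
Delta = np.diag(deg)
L = Delta - A                    # ordinary Laplacian

# Matrix (not element-wise) products, with inverse square roots
# taken only on the diagonal entries
Delta_inv_sqrt = np.diag(deg ** -0.5)
L_sym = Delta_inv_sqrt @ L @ Delta_inv_sqrt

# Entry-wise definition: diagonal entries delta_i/delta_i = 1,
# off-diagonal entries -A_ij / sqrt(delta_i * delta_j)
expected = -A / np.sqrt(np.outer(deg, deg))
np.fill_diagonal(expected, 1.0)

assert np.allclose(L_sym, expected)                      # definitions agree
assert not np.allclose(L_sym, np.diag(np.diag(L_sym)))   # and L_sym is NOT diagonal
```

The final assertion is the point of the question: because $L$ itself is not diagonal, sandwiching it between diagonal matrices rescales its entries but does not make it diagonal.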