Context: I am reading about graph Laplacian matrices $L = D - A$ (degree matrix minus adjacency matrix) and how their eigenvectors correspond to Fourier modes. From the spectral decomposition, $L = U \Lambda U^{T}$, where $\Lambda$ is the diagonal matrix of eigenvalues and $U$ is the matrix whose columns are the eigenvectors.
We can then compute the Fourier transform of a signal $\mathbf{f}$ as $\mathbf{s} = U^{T} \mathbf{f}$. Reference: pg. 83 of the following notes here
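For concreteness, this computation can be sketched in a few lines of numpy. The 4-node path graph and the signal values here are made up for illustration; any symmetric adjacency matrix works the same way:

```python
import numpy as np

# Hypothetical example graph: a 4-node path 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial Laplacian

# Spectral decomposition L = U @ diag(lam) @ U.T; eigh handles
# symmetric matrices and returns orthonormal eigenvectors,
# sorted by ascending eigenvalue.
lam, U = np.linalg.eigh(L)

f = np.array([1.0, 2.0, 0.0, -1.0])  # an arbitrary signal on the nodes
s = U.T @ f                          # graph Fourier transform
f_rec = U @ s                        # inverse transform recovers f
```

Because $U$ is orthogonal, the inverse transform is simply $\mathbf{f} = U\mathbf{s}$.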
Question: How does the graph Fourier transform retain structural information about the graph?
Do the eigenvectors/eigenvector matrix change for different Laplacian matrices?
It sounds silly, but the Fourier modes seem like they would be the same across different Laplacian matrices; my understanding was that they were similar to the columns of the DFT matrix. If, in fact, the eigenvectors of the Laplacian do change between different graphs, then I suppose those eigenvectors would retain structural information about the graph.
Any help would be greatly appreciated.
You have many questions!
The set of eigenvectors and eigenvalues does indeed capture information about the structure of the graph. That's why they're so useful: you can learn a lot about a graph by studying them. They capture information at multiple scales about the graph's shape, connectivity, bottlenecks, etc.
The eigenvectors and eigenvalues almost always change for different graphs (as those almost always generate different Laplacian matrices). For instance, take the graph obtained by uniformly discretizing a circle (the cycle graph). Its eigenvectors are the classical discrete Fourier basis. But if your graph consisted of two disconnected identical subgraphs, then every eigenvalue would be duplicated, and the eigenvectors of the whole graph would be the eigenvectors of each subgraph, extended by zeros on the other component. Very different from the first graph we considered.
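Both examples are easy to check numerically. A small numpy sketch (the cycle size 8 is arbitrary): the cycle graph's Laplacian eigenvalues are the DFT frequencies $2 - 2\cos(2\pi k/n)$, and two disconnected copies of the same graph give a block-diagonal Laplacian whose spectrum contains each eigenvalue twice:

```python
import numpy as np

def cycle_laplacian(n):
    """Laplacian of the n-node cycle graph (a discretized circle)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return np.diag(A.sum(axis=1)) - A

n = 8
lam_cycle = np.linalg.eigvalsh(cycle_laplacian(n))
# Known closed form for the cycle: 2 - 2*cos(2*pi*k/n), the DFT frequencies.
expected = np.sort(2 - 2 * np.cos(2 * np.pi * np.arange(n) / n))

# Two disconnected copies of the same cycle: block-diagonal Laplacian,
# so every eigenvalue of the single cycle appears twice.
L2 = np.block([[cycle_laplacian(n), np.zeros((n, n))],
               [np.zeros((n, n)), cycle_laplacian(n)]])
lam_two = np.linalg.eigvalsh(L2)   # sorted: duplicates are adjacent
```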
People have asked whether the eigenvalues alone are enough to retain all the information about a graph. The answer is no: see "Can you hear the shape of a drum?".
But with both the eigenvectors and the eigenvalues, you can reconstruct the whole Laplacian matrix ($L = U \Lambda U^T$) and therefore the adjacency matrix, i.e. the whole graph. Why does this work? The eigenvectors of the Laplacian are related to the heat equation: they describe the diffusion of heat (or particles) on your graph. Intuitively, imagine a spike of heat at a given node A in your graph. With the eigenvectors, you can understand how quickly the heat propagates throughout the graph and how the temperature at each node evolves. That temperature will change more or less rapidly depending on how connected each node is to the initial node A where the spike occurred. If a node is highly connected to A (many paths connecting the two nodes), its temperature will rise quickly; otherwise it will rise slowly. So the eigenvectors capture the level of connectivity between nodes, and the different eigenvectors measure it at different scales (from coarse to fine). This is how you can recover the most intimate structures of the graph.
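The heat-diffusion picture can be sketched numerically. Under the convention $df/dt = -Lf$, the solution is $f(t) = U e^{-\Lambda t} U^T f(0)$, which the eigendecomposition makes cheap to evaluate. A hypothetical example with a spike on a 10-node path graph:

```python
import numpy as np

# Hypothetical example: path graph 0-1-2-...-9, heat spike at node 0.
n = 10
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)

f0 = np.zeros(n)
f0[0] = 1.0                       # initial spike of heat at node 0

def heat(t):
    # Solution of df/dt = -L f: expand f0 in the eigenbasis,
    # damp each mode by exp(-lam * t), and transform back.
    return U @ (np.exp(-lam * t) * (U.T @ f0))

f_t = heat(1.0)
# Nodes closer to the spike warm up first: f_t[1] > f_t[5] > f_t[9].
```

Note that the total heat is conserved, because the constant vector is an eigenvector with eigenvalue 0.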
Note: more interesting than reconstructing the whole graph, you can also compress it by retaining only a few eigenvectors and eigenvalues. This is equivalent to applying a low-pass filter to the graph, instead of to a signal.
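The signal-side version of this low-pass idea is easy to demonstrate: keep only the Fourier coefficients for the smallest eigenvalues and reconstruct. A hypothetical example, compressing a smooth signal on a 50-node path graph down to 8 of its 50 coefficients (the graph, signal, and cutoff are all made up for illustration):

```python
import numpy as np

# Hypothetical example: a smooth signal on a 50-node path graph.
n = 50
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)       # eigenvalues sorted: low frequencies first

x = np.linspace(0, 1, n)
f = np.sin(2 * np.pi * x)        # a smooth signal on the nodes

k = 8                            # number of coefficients to keep
s = U.T @ f                      # graph Fourier transform
s_lp = np.zeros_like(s)
s_lp[:k] = s[:k]                 # low-pass: zero out the high frequencies
f_lp = U @ s_lp                  # reconstruction from 8 of 50 coefficients

err = np.linalg.norm(f - f_lp) / np.linalg.norm(f)
```

Because the signal is smooth, its energy concentrates in the low-frequency modes, so the relative reconstruction error stays small despite the heavy compression.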
Edit: Adding some background on how to interpret the eigenvectors.
Each eigenvector is a function defined on the graph's nodes. Eigenvectors tend to oscillate along the graph, and as you move towards larger eigenvalues, the number of oscillations (the frequency) of the corresponding eigenvector increases as well (starting with the first eigenvector, which is the constant function). The direction of the oscillations depends on the graph's structure: the first few eigenvectors tend to oscillate along the directions that "separate" the graph into components that are poorly connected to each other. As the eigenvalue increases, the eigenvectors capture other, finer structures, just as the regular Fourier basis captures finer and finer details of a signal. That's what I mean by capturing the graph's structure at different scales. See an example on the dumbbell graph below.
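That "separating" behaviour is exactly what the Fiedler vector (the eigenvector of the second-smallest eigenvalue) does. As a rough numerical sketch, using two 5-node cliques joined by a single bridge edge as a stand-in for the dumbbell (the sizes are arbitrary):

```python
import numpy as np

# Hypothetical dumbbell-like graph: two 5-node cliques joined by one edge.
m = 5
clique = np.ones((m, m)) - np.eye(m)
A = np.block([[clique, np.zeros((m, m))],
              [np.zeros((m, m)), clique]])
A[m - 1, m] = A[m, m - 1] = 1        # the bridge edge
L = np.diag(A.sum(axis=1)) - A

lam, U = np.linalg.eigh(L)
fiedler = U[:, 1]                    # eigenvector of 2nd-smallest eigenvalue

# The sign pattern of the Fiedler vector separates the two
# poorly connected halves of the graph.
left, right = fiedler[:m], fiedler[m:]
```

The second-smallest eigenvalue itself is small (the graph is "almost disconnected" across the bridge), and thresholding the Fiedler vector at zero recovers the two cliques.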