Let $L$ be the Laplacian matrix of a graph $G$, i.e. $L = D - A$, where $D$ is the degree matrix and $A$ the adjacency matrix. Let $v_i$ be an eigenvector of $L$, and let $x, y$ be two vertices of $G$. If $x$ and $y$ are closely connected, are the $x^{th}$ and $y^{th}$ entries of $v_i$ close/similar for all $i$?
If yes, why, and how? If not, how can the claim in the article be explained?
I'll try to provide a brief write-up of the article linked above. The following problem is presented:
Let us consider an example. Suppose we have a hundred balls of different sizes, each with a distinct colour given by a hex colour code, i.e. an integer from 000000 to ffffff. We would like to divide them into three groups based on colour and size, such that the groups are of similar size and any pair of balls within a group has a similar colour (the difference between their hex codes is small) and a similar radius. This task calls for a clustering approach.
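To make the setup concrete, here is a minimal sketch (not from the article; the function name and sample data are my own) of turning each ball's hex colour code and radius into a feature vector that a clustering algorithm could consume:

```python
import numpy as np

def ball_features(hex_codes, radii):
    """Encode each ball as (colour-as-integer, radius).

    The hex colour string, e.g. 'ff00a0', is read as a single integer
    so that 'similar colour' means a small difference between codes,
    as in the problem statement.
    """
    colours = np.array([int(h, 16) for h in hex_codes], dtype=float)
    return np.column_stack([colours, np.array(radii, dtype=float)])

# Three hypothetical balls: black, green, and white, with their radii.
X = ball_features(['000000', '00ff00', 'ffffff'], [1.0, 2.5, 1.1])
```

In practice the two features would need rescaling (the colour axis spans $0$ to $16^6 - 1$ while radii are small), which is one reason the article builds a similarity graph rather than clustering raw coordinates.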
To solve this problem, k-means is not a good idea, as it does not take cluster connectivity into consideration. Thus, we take a look at the spectral clustering technique.
Again, quoting from the article (this is where I am facing a problem):
Spectral clustering is a clustering technique in which the eigenvectors of the Laplacian of a similarity graph are studied to cluster data points. The intuition is that spectral clustering, unlike k-means, observes connectivity in the similarity graph. Therefore, if two data points $x_i$ and $x_j$ belong to the same cluster, then the corresponding rows of the $k$-eigenvector matrix are expected to be close.
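The quoted claim can be checked on a toy graph. Below is a minimal sketch (my own, using NumPy; not code from the article) with two triangles joined by a single weak edge: rows of the first $k = 2$ eigenvectors are nearly identical for vertices in the same triangle and far apart across triangles.

```python
import numpy as np

# Two obvious clusters: vertices {0, 1, 2} and {3, 4, 5}, each a
# triangle, with one bridging edge (2, 3) between the clusters.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # unnormalised graph Laplacian

# eigh returns eigenvalues in ascending order; keep the first k = 2
# eigenvectors as columns, so each vertex is embedded as a row of U.
vals, vecs = np.linalg.eigh(L)
U = vecs[:, :2]

# Distance between rows within a cluster vs. across clusters.
d_same = np.linalg.norm(U[0] - U[1])    # both in the first triangle
d_cross = np.linalg.norm(U[0] - U[3])   # opposite triangles
```

Here `d_same` comes out much smaller than `d_cross`: the constant eigenvector (eigenvalue $0$) contributes nothing to either distance, and the second (Fiedler) eigenvector takes roughly one value on each triangle, so same-cluster rows nearly coincide.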
The article then goes on to explain an algorithm for spectral clustering and how it gives better results.
In the above figure, the first panel is a plot of the data points, and the second is a plot of the rows of the matrix whose columns are the second and third eigenvectors of the Laplacian. The weights used to build the similarity matrix are $e^{-\frac{\operatorname{dist}(x_i,x_j)^2}{2\sigma^2}}$.
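For completeness, the Gaussian weights above can be computed in a few lines; this is a minimal sketch (function name and sample points are my own), assuming Euclidean distance for $\operatorname{dist}$:

```python
import numpy as np

def similarity_matrix(X, sigma=1.0):
    """Gaussian weights w_ij = exp(-dist(x_i, x_j)^2 / (2 * sigma^2)).

    X has one data point per row; the result is symmetric with ones
    on the diagonal (every point is maximally similar to itself).
    """
    diffs = X[:, None, :] - X[None, :, :]      # pairwise differences
    sq_dist = (diffs ** 2).sum(axis=-1)        # squared Euclidean distances
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# Three 1-D points: the first two are close, the third is far away.
W = similarity_matrix(np.array([[0.0], [1.0], [10.0]]), sigma=1.0)
```

Nearby points get weights close to $1$ and distant points get weights close to $0$, which is what makes the resulting similarity graph reflect cluster connectivity.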
