In spectral graph theory, given an $N \times N$ graph Laplacian matrix $L$ and a graph signal $f \in \mathbb R^N$, the graph Fourier transform is defined as:
$\hat{f}(l)=\langle u_l , f\rangle$ (inner product)
where $L=U \Lambda U^{-1}$ is the eigendecomposition of $L$ and $u_l$ is the $l$-th column of $U$ ($l=0,\dots,N-1$).
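Since the Laplacian is symmetric, $U$ can be chosen orthonormal, so $U^{-1} = U^T$ and the transform is just a matrix product. A minimal NumPy sketch of this definition (the function name `gft` is my own):

```python
import numpy as np

def gft(L, f):
    """Graph Fourier transform of signal f w.r.t. Laplacian L.

    Returns the eigenvalues and the coefficients hat{f}(l) = <u_l, f>,
    where L = U diag(lam) U^T with orthonormal U (eigh guarantees this
    for a symmetric L).
    """
    lam, U = np.linalg.eigh(L)
    return lam, U.T @ f

# Usage: Laplacian of a single edge between two nodes.
L = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
lam, fhat = gft(L, np.array([1.0, 0.0]))
```

Note that `eigh` returns *some* orthonormal eigenbasis; the individual eigenvector signs (and, for repeated eigenvalues, the basis of the eigenspace) are not unique.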
But in some places, it is written as:
$\hat{f}(\lambda_l)=\langle u_l , f\rangle$.
I'm confused about this second notation; I think $\hat{f}(\lambda_l)$ is ill-defined.
Consider a graph whose Laplacian $L$ is shown in the image below, together with its eigendecomposition.
Now define a graph signal $f=[1,2,3,4]^T$.
The resulting graph Fourier transform is shown:
This is very strange:
$\hat{f}(\lambda_2)=2$ and $\hat{f}(\lambda_3)=-0.7071$,
but since $\lambda_2=\lambda_3=4$, this would imply $2=-0.7071$.
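The ambiguity can be reproduced numerically. Since the image is not shown here, assume for concreteness the complete graph $K_4$, whose Laplacian $L = 4I - J$ has spectrum $\{0, 4, 4, 4\}$, i.e. a repeated eigenvalue. Two equally valid orthonormal eigenbases give different "coefficients at $\lambda = 4$":

```python
import numpy as np

# Assumed example: complete graph K4, Laplacian L = 4I - J,
# with eigenvalue 4 of multiplicity 3.
N = 4
L = N * np.eye(N) - np.ones((N, N))

# One orthonormal eigenbasis: u_0 for eigenvalue 0, u_1..u_3 for 4.
U = np.column_stack([
    np.ones(4) / 2.0,
    np.array([1, -1, 0, 0]) / np.sqrt(2),
    np.array([1, 1, -2, 0]) / np.sqrt(6),
    np.array([1, 1, 1, -3]) / np.sqrt(12),
])
lam = np.array([0.0, 4.0, 4.0, 4.0])
assert np.allclose(U @ np.diag(lam) @ U.T, L)

# A second, equally valid eigenbasis: rotate u_2 and u_3 inside the
# eigenspace of lambda = 4 by an arbitrary angle.
theta = np.pi / 5
R = np.eye(N)
R[2:, 2:] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
V = U @ R
assert np.allclose(V @ np.diag(lam) @ V.T, L)  # still diagonalizes L

f = np.array([1.0, 2.0, 3.0, 4.0])
print(U.T @ f)  # coefficients w.r.t. the first basis
print(V.T @ f)  # coefficients at lambda = 4 differ
```

Both bases diagonalize $L$, yet the coefficients indexed by the repeated eigenvalue change, which is exactly the ill-definedness in question.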
A more formal argument:
Assume $L$ has an eigenvalue with multiplicity greater than 1 (as in the image above). Without loss of generality, let $\lambda_2 = \lambda_3$. Then the notation forces $\hat{f}(\lambda_2) = \hat{f}(\lambda_3)$.
By definition, $\langle u_2 , f\rangle=\langle u_3 , f\rangle$, i.e. $\langle u_2 - u_3 , f\rangle = 0$.
Since this holds for an arbitrary $f$, which has nothing to do with $L$ or $U$, we must have $u_2 = u_3$.
But then the columns of $U$ are linearly dependent, contradicting the invertibility of $U$ in $L=U \Lambda U^{-1}$.
What is wrong here? Please help.

