I have a good intuition for the eigenvectors and eigenvalues of a matrix describing a transformation in a Hilbert space, and for those of correlation or covariance matrices.
For Hilbert space transformations, the eigenvectors are the directions along which the transformation acts as pure scaling, and the scaling factors are the eigenvalues.
For a correlation or covariance matrix, the eigenvectors are the axes onto which the data can be projected so that the new coordinates become uncorrelated, and each eigenvalue measures how much variance (information) the corresponding axis captures.
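To make that covariance intuition concrete, here is a minimal numpy sketch (synthetic data and variable names are my own): projecting correlated data onto the eigenvectors of its covariance matrix yields coordinates whose covariance is diagonal, with the eigenvalues on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data: y depends linearly on x plus noise.
x = rng.normal(size=500)
y = 0.8 * x + 0.2 * rng.normal(size=500)
data = np.column_stack([x, y])

# Eigendecomposition of the (symmetric) covariance matrix.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Project onto the eigenvectors: the new coordinates are uncorrelated.
projected = data @ eigvecs
new_cov = np.cov(projected, rowvar=False)

# Off-diagonal entries of the new covariance are ~0, and the diagonal
# entries are the eigenvalues (the variance captured by each axis).
print(np.allclose(new_cov, np.diag(eigvals)))
```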
What's the intuition for the meaning of the eigenvectors and eigenvalues of an adjacency matrix?
Why does "dimensionality reduction" (i.e. keeping only the eigenvectors whose eigenvalues are largest and using them to represent the whole graph) work on adjacency matrices?
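To make the question concrete, here is a small sketch of the kind of reduction I mean (random graph, my own variable names): reconstruct a symmetric adjacency matrix from only its top-k eigenpairs. For a symmetric matrix this truncation is, by the Eckart–Young theorem, the best rank-k approximation in Frobenius norm, and the squared error is exactly the sum of the squared discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric adjacency matrix of an undirected graph (no self-loops).
n = 30
upper = rng.random((n, n)) < 0.2
A = np.triu(upper, 1)
A = (A + A.T).astype(float)

# Symmetric, so eigh gives real eigenvalues and orthonormal eigenvectors.
eigvals, eigvecs = np.linalg.eigh(A)

# Keep the k eigenpairs with the largest |eigenvalue| and reconstruct.
k = 5
idx = np.argsort(np.abs(eigvals))[::-1][:k]
A_k = eigvecs[:, idx] @ np.diag(eigvals[idx]) @ eigvecs[:, idx].T

# Squared Frobenius error equals the sum of squared discarded eigenvalues.
discarded = np.delete(eigvals, idx)
err = np.linalg.norm(A - A_k, "fro")
print(np.isclose(err**2, np.sum(discarded**2)))
```

So mechanically the reduction keeps most of the matrix's "mass"; my question is what the retained eigenvectors mean in graph terms.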