Suppose we have a simple graph $G$ with no isolated nodes (i.e. each node $v$ has $\deg(v) \geq 1$).
In deep learning, and in graph convolution algorithms in particular, a popular form of normalization is to multiply the per-node features of a graph by $D^{-1}A$ (where $D$ is the degree matrix of $A$) rather than by $A$ itself. An explanation given by Kipf here notes:
> The second major limitation is that $A$ is typically not normalized and therefore the multiplication with $A$ will completely change the scale of the feature vectors (we can understand that by looking at the eigenvalues of $A$). Normalizing $A$ such that all rows sum to one, i.e. $D^{-1}A$, where $D$ is the diagonal node degree matrix, gets rid of this problem.
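To see the scale point concretely, here is a minimal sketch (a hypothetical 4-node graph with random features, purely illustrative): multiplying by $A$ sums each node's neighbor features, so the output scale grows with degree, while multiplying by $D^{-1}A$ averages them, so the output can never exceed the input's scale.

```python
import numpy as np

# Hypothetical symmetric adjacency matrix of a 4-node graph (illustrative only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
D_inv = np.diag(1.0 / A.sum(axis=1))      # D^{-1}, inverse of the degree matrix
X = np.random.default_rng(0).normal(size=(4, 8))  # random node features

H_sum = A @ X            # un-normalized: sums neighbor features, scale grows
H_avg = D_inv @ A @ X    # normalized: averages neighbor features, scale preserved

# Each row of D^{-1}A sums to one, i.e. it is a convex combination of neighbors,
# so the averaged features are bounded by the input features in magnitude.
print(np.abs(H_sum).max(), np.abs(H_avg).max())
```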
I see why $A$ may have eigenvalues larger than $1$: for instance, the adjacency matrix $$\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0\end{pmatrix}$$ has largest eigenvalue $\sqrt{2}$, so features repeatedly multiplied by $A$ explode as layers stack, much like weight matrices in recurrent neural networks. But I'm not sure what the range of eigenvalues of $D^{-1} A$ is. Any insights or references appreciated.
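A quick numerical sketch of this comparison, using the path-graph adjacency matrix above: the spectral radius of $A$ is $\sqrt{2}$, while after row-normalizing to $D^{-1}A$ the largest eigenvalue magnitude is exactly $1$.

```python
import numpy as np

# The 3-node path-graph adjacency matrix from the question.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# Spectral radius of A: sqrt(2) > 1, so A^k X blows up as layers stack.
eig_A = np.linalg.eigvals(A)
print(np.max(np.abs(eig_A)))  # ≈ 1.4142

# Row-normalized matrix D^{-1}A, where D = diag(row sums) = diag(degrees).
P = np.diag(1.0 / A.sum(axis=1)) @ A

# Spectral radius of D^{-1}A: exactly 1, so the feature scale is controlled.
eig_P = np.linalg.eigvals(P)
print(np.max(np.abs(eig_P)))  # ≈ 1.0
```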
$\def\eqdef{\stackrel{\text{def}}{=}}$ The study of the eigenvalues, eigenvectors, and related properties of non-negative matrices is known as Perron-Frobenius theory. Your matrix $\ D^{-1}A\ $ is row-stochastic, and the theory tells us that the magnitudes of the eigenvalues of such matrices all lie within the interval $\ [0,1]\ $. The column vector $\ \mathbf{1}\eqdef(1,1,\dots,1)^T\ $ is obviously a right eigenvector of $\ D^{-1}A\ $ corresponding to the eigenvalue $\ 1\ $. When the matrix has some special properties, such as irreducibility or primitivity, a little more can be said about its eigenvalues and eigenvectors. Your matrix $\ D^{-1}A\ $ will be irreducible if its corresponding graph is connected, and it will be primitive if, in addition, the graph contains a cycle with an odd number of edges.
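As a quick check of these claims, the sketch below uses a triangle graph (chosen for illustration: it is connected and contains a 3-cycle, so $D^{-1}A$ is both irreducible and primitive). It verifies that $\mathbf{1}$ is a right eigenvector with eigenvalue $1$, and that primitivity makes the powers of $D^{-1}A$ converge to a rank-one matrix.

```python
import numpy as np

# Triangle graph: connected with an odd cycle, so D^{-1}A is primitive.
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
P = np.diag(1.0 / A.sum(axis=1)) @ A  # row-stochastic D^{-1}A

# The all-ones vector is a right eigenvector with eigenvalue 1.
ones = np.ones(3)
print(np.allclose(P @ ones, ones))  # True

# Primitivity: P^k converges; every row tends to the stationary distribution,
# which here is uniform since all degrees are equal.
Pk = np.linalg.matrix_power(P, 50)
print(Pk)  # every row ≈ [1/3, 1/3, 1/3]
```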
There's an extensive literature on the Perron-Frobenius theory, and you'll find a comprehensive exposition in any good book on non-negative matrices. I can recommend Eugene Seneta's text, Non-Negative Matrices: An Introduction to Theory and Applications.