In my probability course, we were discussing applications of Markov chains to computer science -- in particular, how the PageRank algorithm goes about finding stationary distributions, and thus ranks the popularity of webpages.
Let $G$ be a directed graph (the internet) with $n$ vertices, each representing a different webpage, and an edge from vertex $u$ to vertex $v$ if and only if webpage $u$ links to webpage $v$. We then consider the adjacency matrix of this graph -- an $n \times n$ matrix where entry $(i, j)$ has a $1$ if vertex $i$ has a link to vertex $j$, and $0$ otherwise.
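To make the setup concrete, here is a minimal sketch of building such an adjacency matrix from a link structure; the three-page graph is purely illustrative and not part of the question:

```python
import numpy as np

# Hypothetical link structure: page 0 links to pages 1 and 2,
# page 1 links to page 2, and page 2 links back to page 0.
links = {0: [1, 2], 1: [2], 2: [0]}
n = 3

# Entry (u, v) of A is 1 iff page u links to page v, and 0 otherwise.
A = np.zeros((n, n), dtype=int)
for u, targets in links.items():
    for v in targets:
        A[u, v] = 1
```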
Letting $A$ be the adjacency matrix of $G$, our professor stated that "because of the kind of matrix that $A$ is," it will have $n$ linearly independent eigenvectors corresponding to eigenvalues $\lambda_1, \, \lambda_2, \, \dots, \, \lambda_n$ such that one of the eigenvalues has magnitude $1$ and the others have magnitude between $0$ and $1$ (exclusive).
My question is: what is it about the matrix $A$ that allows us to make this assumption? Does it have to do with the sparsity of this matrix? Is it positive-definite? Can we also assume that $A$ is invertible?
If you haven't misquoted your professor, he or she was plainly wrong. The adjacency matrix $$ \left[\begin{array}{cc|cc|ccc}0&1\\ 0&0\\ \hline&&0&1\\ &&1&0\\ \hline&&&&0&1&1\\ &&&&1&0&1\\ &&&&1&1&0\end{array}\right], $$ for instance, does not have a complete eigenbasis. It also possesses a negative eigenvalue ($=-1$) and an eigenvalue ($=2$) greater than $1$.
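These claims are easy to verify numerically. The sketch below builds that block-diagonal matrix (a single directed edge, a 2-cycle, and a triangle $K_3$) and checks its spectrum; the rank-deficiency of the eigenvector matrix reflects the defective nilpotent $2 \times 2$ block:

```python
import numpy as np

# The block-diagonal adjacency matrix from the answer:
# one directed edge (0 -> 1), a 2-cycle (2 <-> 3), and a triangle on {4, 5, 6}.
A = np.zeros((7, 7))
A[0, 1] = 1
A[2, 3] = A[3, 2] = 1
A[4, 5] = A[5, 4] = 1
A[4, 6] = A[6, 4] = 1
A[5, 6] = A[6, 5] = 1

eigvals, eigvecs = np.linalg.eig(A)

# The spectrum is {0, 0, 1, -1, 2, -1, -1}: an eigenvalue of 2
# (from the triangle) and negative eigenvalues of -1.
largest = max(eigvals.real)
smallest = min(eigvals.real)

# The returned eigenvectors do not span R^7, because the nilpotent
# block [[0, 1], [0, 0]] has only one eigenvector for its double eigenvalue 0.
rank = np.linalg.matrix_rank(eigvecs)
```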
In Google's very first (and the only published) version of the PageRank algorithm, the matrix of interest is actually not an adjacency matrix, but a matrix of the form $M=\frac{1-d}{n}J+dDA$, where $J$ is the all-ones matrix, $D$ is a diagonal matrix such that $\frac1{D_{jj}}$ is the row sum of the $j$-th row of the adjacency matrix $A$, and $d\in(0,1)$ is a coefficient for the convex combination. The result is an entrywise positive stochastic matrix. Many nice properties of entrywise positive matrices follow from the Perron-Frobenius theorem. In the case of PageRank, the theorem guarantees that the eigenvector used for ranking always exists and is unique (up to scaling).
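A minimal sketch of this construction, on a made-up four-page link graph (the graph and the value $d = 0.85$ are illustrative assumptions, not from the answer); the Perron eigenvector is found by power iteration:

```python
import numpy as np

# Hypothetical 4-page link graph: rows are source pages, columns are targets.
# Every page has at least one outgoing link, so the row sums are nonzero.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

n = A.shape[0]
d = 0.85                              # damping coefficient in (0, 1)
D = np.diag(1.0 / A.sum(axis=1))      # 1 / D_jj = row sum of row j of A
J = np.ones((n, n))

# M is entrywise positive and row-stochastic: each row sums to 1.
M = (1 - d) / n * J + d * (D @ A)

# Power iteration toward the left eigenvector with eigenvalue 1,
# i.e. the stationary distribution (the PageRank vector),
# unique up to scaling by Perron-Frobenius.
v = np.full(n, 1.0 / n)
for _ in range(100):
    v = v @ M

row_sums = M.sum(axis=1)
```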
For a more detailed discussion, you may read a rather popular feature column from the AMS. (However, beware that there was a subtle error about the second eigenvalue in that article; I don't know whether it has been corrected.)