I know about applications of linear algebra to graph theory, but I find them boring. What interests me is whether one can draw graph-like pictures of linear functions to understand them better.
Do you know of any results like that?
I have one particular question I would like to know the answer to:
Let $f : V \rightarrow V$ be a linear function and $b_1,\dots,b_n \in V$ a basis of $V$. Also, for every $v \in V$ define $v_1,\dots,v_n$ so that $v_1 b_1 + \dots + v_n b_n = v$. Finally, let $G = (B,E)$ be the weighted directed graph with $B = \{b_1,\dots,b_n\}$ and $E = \{ (b_i, b_j) \text{ with weight } f(b_i)_j \mid i,j \in \{1,\dots,n\} \}$. In words: draw a node for every basis element and connect the nodes so that you can see how $f$ maps the basis elements to each other.
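For concreteness, here is how I would compute this edge set from the matrix of $f$ (a small sketch using numpy; `graph_of` is just my own helper name). Column $i$ of the matrix $A$ holds the coordinates of $f(b_i)$, so the weight of the edge $b_i \to b_j$ is the entry $A_{ji}$:

```python
import numpy as np

def graph_of(A):
    """Edge set of G: the edge (i, j) carries weight f(b_i)_j = A[j, i].

    Column i of A holds the coordinates of f(b_i) in the basis,
    so the weight of the edge b_i -> b_j is the entry A[j, i].
    Zero-weight edges are omitted.
    """
    n = A.shape[0]
    return {(i, j): float(A[j, i])
            for i in range(n) for j in range(n) if A[j, i] != 0}

# Example: f(b_1) = 2 b_2 and f(b_2) = 3 b_1, a 2-cycle b_1 -> b_2 -> b_1.
A = np.array([[0.0, 3.0],
              [2.0, 0.0]])
print(graph_of(A))  # {(0, 1): 2.0, (1, 0): 3.0}
```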
Now delete all weights that are zero and assume the other weights are positive. Can we say something like: there is a cycle in $G$ if and only if $f$ has an eigenvector (with a nonzero eigenvalue, say)? To me that sounds like the Perron–Frobenius theorem.
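One direction at least seems easy to check numerically: if $G$ is acyclic, the basis can be reordered so that the matrix of $f$ is strictly triangular, hence nilpotent, so $0$ is its only eigenvalue; closing a cycle gives a positive spectral radius, as Perron–Frobenius predicts for nonnegative matrices. A quick sketch with numpy (the example matrices are my own):

```python
import numpy as np

# Acyclic case: edges b_1 -> b_2 -> b_3 only. The matrix is strictly
# triangular, hence nilpotent, so every eigenvalue is 0.
acyclic = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
assert np.allclose(np.linalg.eigvals(acyclic), 0)

# Add the edge b_3 -> b_1, closing a cycle. The spectral radius becomes
# positive; here the eigenvalues are the cube roots of unity.
cyclic = acyclic.copy()
cyclic[0, 2] = 1.0
rho = max(abs(np.linalg.eigvals(cyclic)))
assert np.isclose(rho, 1.0)
```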
I'm also wondering if one could prove the existence of Jordan normal forms using graphs like this. (Generalized eigenvectors would then perhaps correspond to cycles with trees attached.)
In general I feel like there should be a graph-theoretic perspective on the (basic) concepts I've seen in linear algebra. What do you think?
The idea of a chordal graph is useful in numerical linear algebra. If a symmetric positive definite matrix has a chordal sparsity pattern, then, after symmetrically permuting its rows and columns into a perfect elimination ordering, it has a Cholesky factorization with no fill-in (so that sparsity is not lost -- the Cholesky factor is just as sparse as the original matrix).
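To illustrate (a small numpy sketch; `fill_in` is just an ad-hoc helper): the sparsity graph of a $4$-cycle is not chordal, and factoring it creates fill, while adding a chord and listing the vertices in a perfect elimination ordering gives a factor with exactly the original pattern.

```python
import numpy as np

def fill_in(A):
    """Positions below the diagonal where the Cholesky factor is
    nonzero but A is zero, i.e. the fill-in of the factorization."""
    L = np.linalg.cholesky(A)
    return [(i, j) for i in range(A.shape[0]) for j in range(i)
            if A[i, j] == 0 and abs(L[i, j]) > 1e-12]

# Sparsity graph is a 4-cycle (not chordal): eliminating vertex 0
# connects its neighbors 1 and 3, creating fill at position (3, 1).
cycle = np.array([[4., 1., 0., 1.],
                  [1., 4., 1., 0.],
                  [0., 1., 4., 1.],
                  [1., 0., 1., 4.]])

# Same graph plus a chord, with the vertices listed in a perfect
# elimination ordering: chordal, so no fill-in at all.
chordal = np.array([[4., 0., 1., 1.],
                    [0., 4., 1., 1.],
                    [1., 1., 4., 1.],
                    [1., 1., 1., 4.]])

print(fill_in(cycle))    # [(3, 1)]
print(fill_in(chordal))  # []
```

Both matrices are symmetric and diagonally dominant with positive diagonal, hence positive definite, so the Cholesky factorizations exist.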