I was reading Axler's book, and was wondering why he starts the chapter on eigenvalues and upper-triangular matrices with a discussion of invariant subspaces. I am trying to understand the deep connection here.
Say $T \in \mathscr{L}(V)$ is a linear operator on some vector space $V$, and let $U$ be a subspace of $V$. Then $U$ is invariant under $T$ if $Tu \in U$ for every $u \in U$.
So every linear operator $T \in \mathscr{L}(V)$ has at least two invariant subspaces: the kernel and the range of the operator. But that is not what I am interested in.
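(To spell out why those two subspaces are always invariant, here is the one-line check; this is my own verification, not quoted from the book:)

$$u \in \operatorname{null} T \implies Tu = 0 \in \operatorname{null} T, \qquad u \in \operatorname{range} T \implies Tu \in \operatorname{range} T,$$

where the second implication holds because $Tu$ is, by definition, the image of a vector under $T$.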
So apparently the existence of some invariant subspace other than the kernel and the range is what causes some operators to have eigenvalues. I was just wondering: does every operator with an invariant subspace other than the kernel and the range also admit an upper-triangular matrix with respect to some basis?
Again, I am just trying to understand the relationship between invariant subspaces and upper triangular matrices.
In Linear Algebra Done Right, I started the chapter on eigenvectors and eigenvalues with a brief discussion of invariant subspaces to provide motivation. To understand an operator from a finite-dimensional vector space to itself, one can try to decompose the vector space into a direct sum of subspaces of smaller dimension and then study the operator restricted to each of those subspaces. Those restrictions are operators (meaning that they map each subspace into itself) if and only if the subspaces are invariant under the operator.
The simplest possible nonzero invariant subspaces are one-dimensional subspaces. Thinking about what it means for a one-dimensional subspace to be invariant immediately leads to the notion of eigenvector and eigenvalue.
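(To make that last step explicit, here is the short calculation the paragraph alludes to, written out rather than quoted: suppose $U = \operatorname{span}(v)$ with $v \neq 0$ is invariant under $T$. Then)

$$Tv \in U = \operatorname{span}(v) \implies Tv = \lambda v \ \text{for some scalar } \lambda,$$

so $v$ is an eigenvector of $T$ with eigenvalue $\lambda$. Conversely, every eigenvector of $T$ spans a one-dimensional invariant subspace, so the two notions correspond exactly.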
For operators on complex finite-dimensional vector spaces, this approach culminates in a beautiful decomposition of the vector space as a direct sum of generalized eigenspaces; see Theorem 8.21 (in the third edition).