How does this prove: All eigenvalues of a triangular matrix = All its diagonal entries?


Source: Linear Algebra by David Lay (4th edn, 2011), p. 269, Theorem 5.1.1.

For simplicity, consider the $3\times 3$ case. If $A$ is upper triangular, then $ A-\lambda I= \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ 0 & a_{22} & a_{23}\\ 0 & 0 & a_{33} \end{bmatrix} - \begin{bmatrix} \lambda & 0 & 0\\ 0 & \lambda & 0\\ 0 & 0 & \lambda \end{bmatrix} =\begin{bmatrix} a_{11}-\lambda & a_{12} & a_{13}\\ 0 & a_{22}-\lambda & a_{23}\\ 0 & 0 & a_{33}-\lambda \\ \end{bmatrix}$

$\lambda$ is an eigenvalue of $A \iff$ The equation $(A-\lambda I)x=0$ has a nontrivial solution. $\iff$ $(A-\lambda I)x=0 $ has a free variable $ \iff $
Because of the zero entries in $A-\lambda I$, at least one of the entries on the diagonal of $A-\lambda I$ is zero.
$\color{red}{\iff} \lambda$ equals $\color{red}{one \, of } $ the entries $a_{11},\ a_{22},\ a_{33}$ in $A$. For the lower triangular case, see Question 5.1.28.

$1.$ I don't understand the red $\color{red}{\iff}$. $a_{11} - \lambda = 0 \iff$ the first column is $\mathbf{0} \iff x_1$ is a free variable. But what about the other diagonal entries?

$2.$ I linked an analogous question. How does the proof above prove that all of the eigenvalues = all its diagonal entries, when it states $\color{red}{one \, of } $?
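For what it's worth, the claim itself is easy to sanity-check numerically before dissecting the proof. A minimal sketch, assuming numpy is available; the matrix `A` is an arbitrary upper triangular example, not one from the book:

```python
import numpy as np

# Arbitrary upper triangular example matrix.
A = np.array([[4.0, 2.0, 7.0],
              [0.0, -1.0, 3.0],
              [0.0, 0.0, 5.0]])

# The eigenvalues coincide, as a multiset, with the diagonal entries.
eigenvalues = np.linalg.eigvals(A)
assert np.allclose(sorted(eigenvalues), sorted(np.diag(A)))
```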

3 Answers

Answer 1

I'm not sure I understand your question perfectly, but:

$\lambda$ is an eigenvalue of $A \Leftrightarrow \lambda$ is a root of the characteristic polynomial of $A \Leftrightarrow |A - \lambda I| = 0 \Leftrightarrow \prod_{i=1}^n (a_{ii}-\lambda)=0 \Leftrightarrow \exists i:\ \lambda = a_{ii}$.

Most of the equivalences are quite obvious (you need to know that the determinant of a triangular matrix is the product of its diagonal entries, and that a matrix whose determinant is $0$ has a nontrivial kernel).
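The key identity here, $\det(A - \lambda I) = \prod_i (a_{ii} - \lambda)$, can be spot-checked numerically. A quick sketch assuming numpy; the matrix and the sample points are arbitrary (two degree-3 polynomials that agree at four distinct points are identical):

```python
import numpy as np

# Arbitrary upper triangular example matrix.
A = np.array([[4.0, 2.0, 7.0],
              [0.0, -1.0, 3.0],
              [0.0, 0.0, 5.0]])

# det(A - lam*I) should equal the product of (a_ii - lam) for every lam.
for lam in [-2.0, 0.0, 1.5, 4.0]:
    lhs = np.linalg.det(A - lam * np.eye(3))
    rhs = np.prod(np.diag(A) - lam)
    assert np.isclose(lhs, rhs)

# Hence the roots of the characteristic polynomial are exactly the a_ii.
```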

Answer 2

Expand $\det(A - \lambda I)$ using minors of the first column.

The only term that survives is $(a_{11}-\lambda)\det(B)$ where:

$B = \begin{bmatrix}a_{22}-\lambda&a_{23}\\0&a_{33}-\lambda \end{bmatrix}$, which clearly has determinant:

$(a_{22} - \lambda)(a_{33} - \lambda)$.

A similar strategy works for any $n \times n$ upper triangular matrix.

This shows that every eigenvalue (root of $\det(A - \lambda I)$) is a diagonal entry of $A$ and vice-versa.
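The expansion argument can be sketched in code: a recursive cofactor expansion along the first column, in which the zeros below the pivot of a triangular matrix make every term but one vanish at each level. A minimal pure-Python illustration with a hypothetical example matrix:

```python
# Cofactor expansion along the first column (general formula; the triangular
# structure simply makes all terms with a zero entry in column 1 drop out).
def det_first_column(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for i in range(n):
        if M[i][0] == 0:
            continue  # zero entries in the first column contribute nothing
        minor = [row[1:] for j, row in enumerate(M) if j != i]
        total += (-1) ** i * M[i][0] * det_first_column(minor)
    return total

# Hypothetical example: det equals the product of the diagonal entries,
# 4 * (-1) * 5 = -20.
A = [[4, 2, 7],
     [0, -1, 3],
     [0, 0, 5]]
print(det_first_column(A))  # -20
```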

Surely you can see that (in the $3\times3$ case) if $a_{33} - \lambda = 0$, then the last ROW is $0$; recall column rank = row rank.

If $a_{22} - \lambda = 0$, then the 2nd and 3rd rows of $A - \lambda I$ are linearly dependent, so after row-reduction, we'll have at LEAST one zero row at the bottom.

The logic is a bit more involved for an $n \times n$ upper triangular matrix, but if one of the diagonal elements of $A - \lambda I$ is $0$, it should be clear that THAT row is a linear combination of the rows below it.
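This rank-deficiency argument can be checked directly: substituting any diagonal entry for $\lambda$ makes $A - \lambda I$ rank-deficient, which is exactly the free-variable condition. A short sketch, assuming numpy and an arbitrary example matrix:

```python
import numpy as np

# Arbitrary upper triangular example matrix.
A = np.array([[4.0, 2.0, 7.0],
              [0.0, -1.0, 3.0],
              [0.0, 0.0, 5.0]])
n = A.shape[0]

# Setting lambda to a diagonal entry zeroes that diagonal position of
# A - lambda*I, so the matrix loses rank (a free variable exists).
for lam in np.diag(A):
    assert np.linalg.matrix_rank(A - lam * np.eye(n)) < n
```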

Answer 3

It's equivalent to showing that an upper triangular matrix is injective iff none of its diagonal entries are $0$.

If one of the diagonal entries, say the $i$th, is $0$, then restricting the operator to the span of the first $i$ basis vectors gives a map into the span of the first $i-1$ basis vectors, which by Rank-Nullity cannot be injective.

Now, if all of the diagonal entries are nonzero, then the columns must be independent: the last column is the only one with a nonzero $n$th entry, so its coefficient has to be $0$. Continuing this way, you can kill all the coefficients.
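The column-by-column elimination in this last paragraph is exactly back-substitution: with every diagonal entry nonzero, each coefficient in turn is forced, so $Ux = 0$ has only the trivial solution and the map is injective. A minimal sketch in pure Python, using exact arithmetic via `fractions`; the matrix `U` is a hypothetical example:

```python
from fractions import Fraction

# Back-substitution for upper triangular U with nonzero diagonal entries.
# Each x[i] is determined uniquely from the entries below it, which is
# precisely why the solution of U x = b is unique (injectivity).
def back_substitute(U, b):
    n = len(U)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (Fraction(b[i]) - s) / Fraction(U[i][i])  # needs U[i][i] != 0
    return x

U = [[4, 2, 7],
     [0, -1, 3],
     [0, 0, 5]]
# The only solution of U x = 0 is x = 0.
print(back_substitute(U, [0, 0, 0]))  # [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)]
```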