Hoffman's theorem on triangulable operators


Lemma: Let $V$ be a finite-dimensional vector space over the field $F$. Let $T$ be a linear operator on $V$ such that the minimal polynomial for $T$ is a product of linear factors

$$p = (x-c_1)^{r_1} \cdots (x-c_k)^{r_k}, \qquad c_i \in F.$$

Let $W$ be a proper subspace of $V$ which is invariant under $T$. Then there exists a vector $\alpha$ in $V$ such that

a) $\alpha\notin W$

b) $(T-cI)\alpha\in W$, for some characteristic value $c$ of the operator $T$

Then follows the theorem:

Let $V$ be a finite-dimensional vector space over the field $F$. Let $T$ be a linear operator on $V$. Then $T$ is triangulable if and only if the minimal polynomial for $T$ is a product of linear polynomials over $F$.

and then the proof: Suppose that the minimal polynomial factors as

$$p = (x-c_1)^{r_1} \cdots (x-c_k)^{r_k}.$$

By repeated application of the lemma, we shall arrive at an ordered basis $\mathcal{B} = \{ v_1,\ldots,v_n \}$ in which the matrix representing $T$ is upper triangular:

$$[T]_{\mathcal{B}} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\ 0 & a_{22} & a_{23} & \cdots & a_{2n}\\ 0 & 0 & a_{33} & \cdots & a_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a_{nn} \end{bmatrix}$$

I don’t understand how to use the lemma to compute that upper-triangular matrix $[T]_{\mathcal{B}}$; could anyone help?

Best answer:

The matrix of $T$ with respect to an ordered basis $[v_1,\ldots,v_n]$ is upper triangular if and only if for $k=0,1,\ldots,n$ the subspace $\langle v_1,\ldots,v_k\rangle$ is $T$-invariant (this is immediately checked). Regardless of the $T$-invariance, such a chain of subspaces from $0$ to $V$ (each one containing the one before, with dimension increasing by one) is called a complete flag of subspaces.
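
To make that equivalence concrete, here is a minimal sketch in Python/SymPy (the matrix `A` is a made-up example, not taken from the question) checking that for an upper-triangular matrix each prefix span $\langle e_1,\ldots,e_k\rangle$ of the standard basis is invariant:

```python
import sympy as sp

# A hypothetical upper-triangular matrix (any one will do)
A = sp.Matrix([[2, 1, 0],
               [0, 2, 3],
               [0, 0, 5]])

n = A.rows
for k in range(n):
    # A*e_k lies in span(e_0, ..., e_k) exactly when its entries
    # below position k vanish; for triangular A this always holds
    image = A * sp.eye(n)[:, k]
    assert all(image[i] == 0 for i in range(k + 1, n))
print("each prefix span <e_0,...,e_k> is A-invariant")
```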

So the goal is to find a complete flag of $T$-invariant subspaces; once this is done we can choose $v_k$ to lie in the $k$-dimensional space of the flag but not in the previous one. Starting with the $0$-dimensional subspace, which is obviously $T$-invariant, repeated application of the lemma each time gives us a new vector $\alpha$ such that, when added to our previous $T$-invariant subspace (call it $W$), it yields a new $T$-invariant subspace $\langle W,\alpha\rangle$ (or $W+\langle\alpha\rangle$ if you prefer); this space is indeed $T$-invariant because $T\alpha = c\alpha + (T-cI)\alpha$ lies in $\langle W,\alpha\rangle$. This stops only when one reaches the whole space $V$, and then one is done.
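
As an illustration, here is a minimal SymPy sketch of that loop (the matrix `A`, the helper name `lemma_step`, and the search strategy are my own illustrative choices, assuming the minimal polynomial splits over $\mathbb{Q}$). One lemma step looks for $\alpha\notin W$ with $(T-cI)\alpha\in W$ via the nullspace of the block matrix $[\,T-cI \mid W\,]$, which avoids forming the quotient $V/W$ explicitly:

```python
import sympy as sp

def lemma_step(A, basis):
    """One application of the lemma: given columns `basis` spanning a
    proper A-invariant subspace W, return alpha not in W such that
    (A - c*I)*alpha lies in W for some eigenvalue c of A."""
    n = A.rows
    W = sp.Matrix.hstack(*basis) if basis else sp.zeros(n, 0)
    for c in A.eigenvals():
        # (A - cI)x + W*y = 0  means  (A - cI)x lies in W
        M = (A - c * sp.eye(n)).row_join(W)
        for v in M.nullspace():
            alpha = v[:n, :]
            if W.row_join(alpha).rank() > W.rank():  # alpha not in W
                return alpha
    raise ValueError("minimal polynomial does not split over the field")

# Build a triangularizing basis by repeated lemma steps
A = sp.Matrix([[5, 4, 2], [4, 5, 2], [2, 2, 2]])  # made-up example
basis = []
while len(basis) < A.rows:
    basis.append(lemma_step(A, basis))

P = sp.Matrix.hstack(*basis)
print(P.inv() * A * P)  # upper triangular by construction
```

Each pass adds one vector, so after $n=\dim V$ passes the columns of $P$ form a basis adapted to the flag and $P^{-1}AP$ is upper triangular.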


By the way, I would prefer to show existence of such a $T$-invariant complete flag by direct induction on the dimension. The trick is not to look for an ordinary (right) eigenvector, but to look for a left eigenvector, a linear form $f:V\to F$ such that $f\circ T=\lambda f$; as long as the characteristic polynomial$~\chi\in F[X]$ of the transpose of$~T$ (which is the same as that of $T$) has a root$~\lambda$, such a left eigenvector is assured to exist. Now $\ker f$ is a $T$-invariant hyperplane$~H$ and the characteristic polynomial of the restriction of$~T$ to$~H$ is the quotient $\chi\big/(X-\lambda)$. If, as assumed, $\chi$ splits into linear factors over$~F$, this is still the case with the quotient, so we can apply our induction hypothesis to the restriction of$~T$ to$~H$, and get (the rest of) our $T$-invariant complete flag.
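
Here is the same inductive argument as a minimal SymPy sketch (again with made-up names and an example matrix, assuming the characteristic polynomial splits over $\mathbb{Q}$): find a left eigenvector $f$, restrict to the invariant hyperplane $H=\ker f$, and recurse:

```python
import sympy as sp

def triangularizing_basis(A):
    """Basis adapted to a complete flag of A-invariant subspaces,
    built by recursion on dimension via left eigenvectors."""
    n = A.rows
    if n == 1:
        return [sp.Matrix([1])]
    lam = list(A.eigenvals())[0]                 # a root of char(A)
    f = (A.T - lam * sp.eye(n)).nullspace()[0]   # f^T A = lam f^T
    H = sp.Matrix.hstack(*f.T.nullspace())       # n x (n-1): basis of ker f^T
    # A maps H into itself (f^T A h = lam f^T h = 0), so A*H = H*B
    # with B the matrix of the restriction in the columns of H
    B = (H.T * H).inv() * H.T * A * H
    inner = [H * u for u in triangularizing_basis(B)]
    # complete the flag with any standard basis vector outside H
    for i in range(n):
        cand = sp.eye(n)[:, i]
        if sp.Matrix.hstack(*(inner + [cand])).rank() == n:
            return inner + [cand]

A = sp.Matrix([[5, 4, 2], [4, 5, 2], [2, 2, 2]])  # same made-up example
P = sp.Matrix.hstack(*triangularizing_basis(A))
print(P.inv() * A * P)  # upper triangular
```

Both sketches produce some upper-triangular form of `A`; the entries depend on which eigenvalues and basis vectors are picked along the way.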

Note that this constructs the flag in the opposite order from the one in which the lemma does. The lemma essentially looks for a right eigenvector, but rather than restricting to a subspace, it must find the eigenvector in the quotient of $V$ by the $T$-invariant subspace $W$ (and lift it back to $V$, where it is no longer a true eigenvector).