Confusion about the description of generalized eigenspace


While learning about generalized eigenspaces, I found two statements from two different textbooks that seem to contradict each other.

In Chapter 8 of the book Linear Algebra Done Right, 3rd edition, by Sheldon Axler, a theorem (Theorem 8.11, Description of generalized eigenspaces) is stated as follows:

Suppose $\mathcal{T}\in \mathcal{L}(V)$ and $\lambda\in\mathbb{F}$. Then $G(\lambda, \mathcal{T})=\text{Null}(\mathcal{T}-\lambda \mathcal{I})^{\text{dim}V}$

Here $G(\lambda, \mathcal{T})$ means the generalized eigenspace of $\mathcal{T}$ corresponding to the eigenvalue $\lambda$, and "Null" stands for the kernel.

Another textbook (Linear Algebra, a textbook for mathematics majors, by Shangzhi Li) contains a similar theorem. Here is what it says (translated from Chinese, so the wording may be slightly imprecise):

Suppose an operator $\mathcal{T}$ defined on an $n$-dimensional linear space $V$ has $t$ distinct eigenvalues $\lambda_1,\dots,\lambda_t$, and its characteristic polynomial has the form $$P_\mathcal{T}(\lambda)=(\lambda-\lambda_1)^{n_1}\cdots(\lambda-\lambda_t)^{n_t}.$$ Then for each eigenvalue $\lambda_i$ $(1\leq i\leq t)$, the subspace $\text{Null}(\mathcal{T}-\lambda_i \mathcal{I})^{n_i}$ consists of the zero vector together with all of the generalized eigenvectors with respect to $\lambda_i$, and has dimension $n_i$.

Now I'm quite unsure about the correct description of the generalized eigenspace.

Comparing these two theorems: the first states that the generalized eigenspace $G(\lambda, \mathcal{T})$ equals $\text{Null}(\mathcal{T}-\lambda\mathcal{I})^{\text{dim}V}$, while the second suggests that the generalized eigenspace with respect to $\lambda_i$ should be $\text{Null}(\mathcal{T}-\lambda_i\mathcal{I})^{n_i}$, where $n_i$ is the algebraic multiplicity of $\lambda_i$ from the characteristic polynomial. Since $n_i$ is in general smaller than $\text{dim}V$, it seems to me that $\text{Null}(\mathcal{T}-\lambda_i\mathcal{I})^{n_i}\neq \text{Null}(\mathcal{T}-\lambda_i\mathcal{I})^{\text{dim}V}$ for some eigenvalue $\lambda_i$, which looks like a contradiction.

More interestingly, an exercise problem in today's after-class assignment gives me an even more confusing conclusion.

Suppose $\mathcal{T}\in\mathcal{L}(V)$ has minimal polynomial $$D_{\mathcal{T}}(\lambda)=(\lambda-\lambda_1)^{k_1}\cdots(\lambda-\lambda_t)^{k_t}.$$ Prove that $G(\lambda_i, \mathcal{T})=\text{Null}(\mathcal{T}-\lambda_i\mathcal{I})^{k_i}$.

At this point I'm totally dizzy... Is it possible that these three statements are not all correct? Or have I missed something important? I find it hard to believe that $$\text{Null}(\mathcal{T}-\lambda_i \mathcal{I})^{\text{dim}V}=\text{Null}(\mathcal{T}-\lambda_i \mathcal{I})^{n_i}=\text{Null}(\mathcal{T}-\lambda_i \mathcal{I})^{k_i}$$

Can anyone help me with that? Thanks a lot!


As Daniel says in the comments, all of these definitions are equivalent; that is,

$$\text{ker}(T - \lambda)^{\dim V} = \text{ker}(T - \lambda)^{n_i} = \text{ker}(T - \lambda)^{k_i}.$$
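As a sanity check before the proof, these equalities can be verified numerically on a small example I made up for illustration: a $5 \times 5$ matrix built from Jordan blocks $J_2(1)$, $J_1(1)$, $J_2(2)$, so that for $\lambda = 1$ the algebraic multiplicity is $n_1 = 3$ while the exponent in the minimal polynomial is $k_1 = 2$.

```python
import numpy as np
from numpy.linalg import matrix_rank, matrix_power

# Hypothetical example: Jordan blocks J_2(1), J_1(1), J_2(2).
# For lambda = 1: algebraic multiplicity n_1 = 3, minimal-polynomial exponent k_1 = 2.
A = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 2, 1],
    [0, 0, 0, 0, 2],
], dtype=float)

n = A.shape[0]
lam = 1.0
N = A - lam * np.eye(n)

# dim Null((A - I)^p) = n - rank((A - I)^p), for p = 1, ..., dim V
kernel_dims = [n - matrix_rank(matrix_power(N, p)) for p in range(1, n + 1)]
print(kernel_dims)  # [2, 3, 3, 3, 3]
```

The kernel dimension stops growing at $p = k_1 = 2$ and never changes afterward, so $\text{ker}(A - I)^{k_1} = \text{ker}(A - I)^{n_1} = \text{ker}(A - I)^{\dim V}$ in this example.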

This can be seen using Jordan normal form, but that's overkill. Note first that since $k_i \le n_i \le \dim V$, the containments $\text{ker}(T - \lambda_i)^{k_i} \subseteq \text{ker}(T - \lambda_i)^{n_i} \subseteq \text{ker}(T - \lambda_i)^{\dim V}$ are automatic, so it suffices to show that the largest of these kernels is contained in the smallest. Let $m(t) = \prod_i (t - \lambda_i)^{k_i}$ be the minimal polynomial of $T$ and let $v \in \text{ker}(T - \lambda_j)^{\dim V}$ for some fixed $j$. Then we have

$$m(T) v = \left( \prod_{i \neq j} (T - \lambda_i)^{k_i} \right) (T - \lambda_j)^{k_j} v.$$

We want to show that $v_j = (T - \lambda_j)^{k_j} v = 0$. Since $m$ is the minimal polynomial, $m(T) = 0$, so the identity above gives that $v_j$ lies in the kernel of $\prod_{i \neq j} (T - \lambda_i)^{k_i}$. This kernel is contained in (and in fact is exactly) the sum of the generalized eigenspaces of the $\lambda_i, i \neq j$, and a basic fact is that generalized eigenspaces of distinct eigenvalues are independent: their sum is direct. On the other hand, $v_j$ lies in the generalized eigenspace of $\lambda_j$. Hence $v_j = 0$ as desired.

More explicitly, we can prove the following.

Lemma: Suppose $p, q \in \mathbb{C}[t]$ are two polynomials such that $p(T) v = q(T) v = 0$. Then $g = \gcd(p, q)$ also satisfies $g(T) v = 0$.

Proof. Abstractly, the point is that $\{ p \in \mathbb{C}[t] : p(T) v = 0 \}$ is an ideal of $\mathbb{C}[t]$ and hence principal. Concretely, we can apply Bezout's lemma to find polynomials $a, b$ such that $ap + bq = g$, which gives $g(T) v = a(T) p(T) v + b(T) q(T) v = 0$. $\Box$
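The Bezout step can be carried out concretely with a computer algebra system. Here is a small sketch using SymPy's `gcdex`, with hypothetical polynomials standing in for the two coprime factors that appear in the final step:

```python
import sympy as sp

t = sp.symbols('t')
# Hypothetical stand-ins with no common root:
p = (t - 1)**2   # plays the role of (t - lambda_j)^{dim V - k_j}
q = (t - 2)**3   # plays the role of prod_{i != j} (t - lambda_i)^{k_i}

# Extended Euclidean algorithm: a*p + b*q = g = gcd(p, q)
a, b, g = sp.gcdex(p, q, t)
print(g)  # 1, since p and q share no roots
assert sp.expand(a*p + b*q - g) == 0  # Bezout identity holds
```

Since $g = 1$ here, substituting $T$ gives $a(T)p(T) + b(T)q(T) = I$, which is exactly why a vector killed by both $p(T)$ and $q(T)$ must be zero.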

So $v_j = (T - \lambda_j)^{k_j} v$ satisfies $(T - \lambda_j)^{\dim V - k_j} v_j = 0$, but it also satisfies $\left( \prod_{i \neq j} (T - \lambda_i)^{k_i} \right) v_j = 0$ (since $m(T) = 0$), and these two polynomials have no roots in common, so their $\gcd$ is equal to $1$. Hence $v_j = 0$.