Generalized Eigenspaces Associated to Different Values


Let $V$ be a vector space and $T \in \mathcal{L}(V)$. Show that if $a \neq b$, then $G(a,T) \cap G(b,T) = \{0\}$, where $G(\lambda,T)$ denotes the generalized eigenspace of $T$ corresponding to $\lambda$.

I am having a lot of trouble with this problem; I've been thinking about it for the past few days with no progress. Here is what I've come up with:

If $v \in G(a,T) \cap G(b,T)$ with $v \neq 0$, then there exist $i,j \in \Bbb{N}$ such that $(T-aI)^i v =0$ and $(T-bI)^j v = 0$. Let $i$ and $j$ be the smallest such integers, and suppose WLOG that $i \le j$. Then

$$w_1 := (T-aI)^{i-1}v \neq 0$$

and

$$w_2 := (T-bI)^{j-1}v \neq 0$$

Note that $(T-aI)w_1 =0$ and therefore $Tw_1 = aw_1$; similarly, $Tw_2 = bw_2$...

I've played with these equations in various ways. My strategy is to get $(\mbox{something non-zero}) \cdot v = 0$, which will obviously force $v=0$; but this is proving extremely difficult. My hope is that I can get $(a-b)v=0$, but I don't see how to do it.

I could use some help...
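For intuition, the claim can be checked on a concrete matrix (a sketch using sympy; the matrix and the eigenvalues 2 and 5 are my arbitrary choices, not part of the problem):

```python
from sympy import Matrix, eye

# T has eigenvalue 2 on a 2x2 Jordan block and eigenvalue 5 on a 1x1 block,
# so G(2,T) is 2-dimensional and G(5,T) is 1-dimensional.
T = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])
n = T.rows

# G(lambda, T) = null (T - lambda I)^(dim V); the power dim V always suffices.
G2 = ((T - 2*eye(n))**n).nullspace()
G5 = ((T - 5*eye(n))**n).nullspace()

# The intersection is {0} iff the union of the two bases is linearly
# independent, i.e. iff stacking all basis vectors as columns has full rank.
combined = Matrix.hstack(*(G2 + G5))
print(len(G2), len(G5), combined.rank())  # 2 1 3
```

Here the dimensions of the two generalized eigenspaces add up to the rank of the combined basis, so the intersection is trivial, as the problem asserts.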

There are 3 answers below.

BEST ANSWER

Following your setup, let $v\in G(a,T)\cap G(b,T)$ and suppose $v \neq 0$. Let $i$ be the smallest number such that $(T-aI)^iv=0$, and let $w=(T-aI)^{i-1}v\neq 0$. Then $(T-aI)w=0$ and so $Tw=aw$. As such, for $\lambda \in \mathbb{F}$ and $n\in \mathbb{N}$, $(T-\lambda I)^nw=(a-\lambda)^n w$. Now let $j\in\mathbb{N}$ be such that $(T-bI)^jv=0$. Then $(T-aI)^{i-1}(T-bI)^jv=0$. However, both of these factors are polynomials in $T$ and therefore commute (if you are using Linear Algebra Done Right by Axler, as I suspect you are, this is 5.20 on page 144). As such, $$0=(T-bI)^j(T-aI)^{i-1}v=(T-bI)^jw=(a-b)^jw.$$ Since $w \neq 0$, this forces $(a-b)^j = 0$, i.e. $a=b$, which is a contradiction. As such, no such nonzero $v$ exists. (If you are using Axler, this proof is modelled on Axler's proof of 8.13.)
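The stepping stone $(T-\lambda I)^n w = (a-\lambda)^n w$ for an eigenvector $w$ can be sanity-checked symbolically (a sketch with sympy; the upper-triangular matrix, $a = 4$, and $n = 3$ are arbitrary choices):

```python
from sympy import Matrix, eye, symbols, simplify, zeros

lam = symbols('lambda')

# T is upper triangular with eigenvalue a = 4, and w = (1, 0) satisfies T w = 4 w.
T = Matrix([[4, 1],
            [0, 7]])
w = Matrix([1, 0])
a, n = 4, 3

# Claim from the answer: (T - lambda I)^n w = (a - lambda)^n w for all lambda.
lhs = ((T - lam*eye(2))**n) * w
rhs = (a - lam)**n * w
assert simplify(lhs - rhs) == zeros(2, 1)
```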

ANSWER

I guess the cheapest way is the Bezout identity. The polynomials with coefficients in a field, say the reals or the complexes, form a Euclidean ring. The units in the ring are the nonzero constants, i.e. the nonzero field elements.

Added: after flipping through Dummit and Foote, let me add that the "Bezout" stuff is the theorem that any Euclidean Domain is also a Principal Ideal Domain. In the paragraphs below, this means that the ideal in $F[x]$ generated by $(x-a)^m$ and $(x-b)^n$ must be the entire ring, in particular contain the constant polynomial $1.$

For univariate polynomials $f$ and $g$ with coefficients in a field, there exist polynomials $p$ and $q$ such that $pf + qg = 1$ if and only if $f$ and $g$ have no common root in an algebraic closure of that field (commonly the field of complex numbers).

Given $a \neq b,$ we have $x-a$ and $x-b$ coprime. In particular, $b-a \neq 0$ and $(x-a) - (x-b) = b-a$ is a constant. Being a bit more careful, $$ \frac{x-a}{b-a} - \frac{x-b}{b-a} = 1 $$

Anyway, $(x-a)^m$ and $(x-b)^n$ are coprime as well. Let me check whether I know a quick proof of that; there may be a one-liner. (For the moment, I just note that they have no roots in common in any field extension; all roots are accounted for, so no common factor can have any roots, i.e. any common factor is a constant.) We will have polynomials $p,q$ such that $$ p(x) (x-a)^m - q(x) (x-b)^n = 1. $$ We switch this back to $$ \color{red}{ p(T) (T-aI)^m - q(T) (T-bI)^n = I.} $$ Therefore, if $(T - aI)^m v = 0$ and $(T - bI)^n v = 0,$ it follows that $Iv=0$ and $v=0.$
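The red identity can be tested on a concrete pair of exponents and eigenvalues (a sketch with sympy; $a=2$, $b=5$, $m=3$, $n=2$, and the matrix are arbitrary choices). Note that sympy's `gcdex` returns cofactors with a plus sign, $pf + qg = 1$, which is the same identity up to the sign of $q$:

```python
from sympy import symbols, gcdex, Poly, Matrix, eye, zeros

x = symbols('x')
a, b, m, n = 2, 5, 3, 2
f = (x - a)**m
g = (x - b)**n

p, q, h = gcdex(f, g, x)   # p*f + q*g = h, and h = 1 since f, g are coprime
assert h == 1

def eval_poly_at_matrix(poly, M):
    """Evaluate a univariate polynomial at a square matrix via Horner's scheme."""
    R = zeros(M.rows, M.cols)
    for c in Poly(poly, x).all_coeffs():   # coefficients, highest degree first
        R = R*M + c*eye(M.rows)
    return R

# Substituting any square matrix T for x turns p*f + q*g = 1 into p(T)f(T) + q(T)g(T) = I.
T = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 5]])
lhs = (eval_poly_at_matrix(p, T)*eval_poly_at_matrix(f, T)
       + eval_poly_at_matrix(q, T)*eval_poly_at_matrix(g, T))
assert lhs == eye(3)
```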

==============================================================

I did a sample Bezout computation for $(x-1)^5$ and $(x-2)^3.$ The final outcome was

$$ \left( x^{5} - 5 x^{4} + 10 x^{3} - 10 x^{2} + 5 x - 1 \right) \left( 15 x^{2} - 65 x + 71 \right) - \left( x^{3} - 6 x^{2} + 12 x - 8 \right) \left( 15 x^{4} - 50 x^{3} + 66 x^{2} - 39 x + 9 \right) = 1 $$

=====================================

The two inputs are

$$ (x-1)^5 = x^{5} - 5 x^{4} + 10 x^{3} - 10 x^{2} + 5 x - 1 $$

$$ (x-2)^3 = x^{3} - 6 x^{2} + 12 x - 8 $$

The Euclidean algorithm gives

$$ \left( x^{5} - 5 x^{4} + 10 x^{3} - 10 x^{2} + 5 x - 1 \right) = \left( x^{3} - 6 x^{2} + 12 x - 8 \right) \cdot \color{magenta}{ \left( x^{2} + x + 4 \right) } + \left( 10 x^{2} - 35 x + 31 \right) $$ $$ \left( x^{3} - 6 x^{2} + 12 x - 8 \right) = \left( 10 x^{2} - 35 x + 31 \right) \cdot \color{magenta}{ \left( \frac{ 2 x - 5 }{ 20 } \right) } + \left( \frac{ 3 x - 5 }{ 20 } \right) $$ $$ \left( 10 x^{2} - 35 x + 31 \right) = \left( \frac{ 3 x - 5 }{ 20 } \right) \cdot \color{magenta}{ \left( \frac{ 600 x - 1100 }{ 9 } \right) } + \left( \frac{4}{9} \right) $$ The last nonzero remainder is the constant $\frac{4}{9}$, so the two polynomials are coprime. Back-substituting through these three steps and multiplying through by $\frac{9}{4}$ yields the Bezout identity $$ \left( x^{5} - 5 x^{4} + 10 x^{3} - 10 x^{2} + 5 x - 1 \right) \left( 15 x^{2} - 65 x + 71 \right) - \left( x^{3} - 6 x^{2} + 12 x - 8 \right) \left( 15 x^{4} - 50 x^{3} + 66 x^{2} - 39 x + 9 \right) = \left( 1 \right) $$
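If sympy is available, the whole computation can be reproduced in one call (a sketch; `gcdex` returns a triple $(p, q, \gcd)$ with $pf + qg = \gcd$):

```python
from sympy import symbols, gcdex, expand

x = symbols('x')
f = (x - 1)**5
g = (x - 2)**3

# p, q are the Bezout cofactors; h is the (monic) gcd.
p, q, h = gcdex(f, g, x)
assert h == 1                    # gcd is 1: the two powers are coprime
assert expand(p*f + q*g) == 1    # the Bezout identity itself
```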

ANSWER

The proof I know of this follows from a bunch of general theory about finitely-generated modules over principal ideal domains. If that means something to you, definitely check it out. Otherwise, I will try to boil it down to the important points. You do need to know something about polynomial divisibility, though; there is no getting around that.

Let $T: V \to V$ be a linear transformation of the finite-dimensional vector space $V$ over a field $k$, and let $k[x]$ be the ring of polynomials with coefficients in $k$. Given a polynomial $p \in k[x]$, we can form the linear map $p(T)$ in the usual way.

Given a polynomial $p \in k[x]$ we define $V_p := \{v \in V \mid p(T)v = 0\}$, the subspace of $V$ of vectors killed by $p$. For example, if $p(x) = (x - \lambda)$, then $V_p$ is the $\lambda$-eigenspace of $T$. If $p(x) = (x - \lambda)^n$ for $n$ large enough, then $V_p$ will be the generalised eigenspace for $\lambda$.
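For example (a sympy sketch; the single Jordan block with $\lambda = 3$ is an arbitrary choice): the eigenspace $V_{x-\lambda}$ can be strictly smaller than the generalised eigenspace $V_{(x-\lambda)^n}$.

```python
from sympy import Matrix, eye

# A single 2x2 Jordan block: only one honest eigenvector for lambda = 3,
# but the generalised eigenspace is all of V.
T = Matrix([[3, 1],
            [0, 3]])
lam = 3

V_p1 = (T - lam*eye(2)).nullspace()        # V_p for p(x) = x - 3
V_p2 = ((T - lam*eye(2))**2).nullspace()   # V_p for p(x) = (x - 3)^2

print(len(V_p1), len(V_p2))  # 1 2
```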

So we can restate your problem: if $p(x) = (x - \lambda)^n$ and $q(x) = (x - \mu)^m$ for some $n, m \geq 1$ and $\lambda \neq \mu$, then why is $V_p \cap V_q = 0$? The polynomials $p, q$ are coprime, i.e. they share no non-constant factors. For any two coprime polynomials $p, q$, there exist two other polynomials $r, s$ such that $rp + sq = 1$. Plugging $T$ into this identity, we get $r(T)p(T) + s(T)q(T) = \mathrm{id}_V$. Now suppose $v \in V_p \cap V_q$; then $p(T)v = 0$ and $q(T)v = 0$, so by the above identity $$ v = r(T)p(T)v + s(T)q(T)v = 0.$$

The only non-obvious part in the above is the assertion that the polynomials $r, s$ exist. You might have seen a proof of this before in the case of integers, where the extended Euclidean algorithm is used to create the integers $r, s$. The proof for polynomials is exactly the same, and the algorithm still works. But perhaps you can learn about that from somewhere else.
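A minimal version of that extended Euclidean algorithm for polynomials might look like this (a sketch; the function name `egcd_poly` and the choice to work over the rationals are mine, and sympy's `div` is used only for polynomial long division):

```python
from sympy import symbols, div, expand, S

x = symbols('x')

def egcd_poly(f, g):
    """Extended Euclid for univariate polynomials in x over the rationals.
    Returns (s, t, d) with s*f + t*g = d, where d is the last nonzero
    remainder (a gcd of f and g, up to a constant factor)."""
    old_r, r = expand(f), expand(g)
    old_s, s = S(1), S(0)
    old_t, t = S(0), S(1)
    while r != 0:
        quo, rem = div(old_r, r, x, domain='QQ')   # polynomial long division
        old_r, r = r, expand(rem)
        old_s, s = s, expand(old_s - quo*s)
        old_t, t = t, expand(old_t - quo*t)
    return old_s, old_t, old_r

f = (x - 1)**5
g = (x - 2)**3
s, t, d = egcd_poly(f, g)

assert d.is_number and d != 0       # constant gcd: f and g are coprime
assert expand(s*f + t*g - d) == 0   # the Bezout relation holds
```

Dividing $s$ and $t$ by the constant $d$ then gives cofactors with $\tilde{s}f + \tilde{t}g = 1$, exactly the identity used in the argument above.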