Question about a line of reasoning with cyclic subspaces


The following reasoning was used in a proof of the cyclic decomposition theorem by Hoffman and Kunze (for reference, part 3 of the proof, page 235).

Suppose $T$ is a linear transformation, and I have a $T$-admissible subspace $W$. We have some vector $v$ not in $W$, and we define $$U = W + Z(v; T)$$ where $Z(v; T)$ is the cyclic subspace of all vectors $f(T)v$ for polynomials $f$ in the base field.

The idea is to construct another vector $y$ whose $T$-conductor $q$ into $W$ is the same as the $T$-conductor of $v$, but which additionally satisfies $q(T)y = 0$, so that $$W \cap Z(y; T) = \{0\}.$$ Since $W$ is $T$-admissible and $q(T)v \in W$, there is some vector $w \in W$ such that $q(T)v = q(T)w$. Letting $y = v - w$, one sees fairly easily that $q(T)y = 0$, and since $y - v$ is in $W$, we have that $f(T)y$ is in $W$ if and only if $f(T)v$ is, so their $T$-conductors into $W$ coincide.

What confuses me is this. The authors immediately conclude that $$U = W \oplus Z(y; T),$$ where $U$ is as defined earlier. I'm not exactly sure how this follows from what's been deduced before. I think it suffices to show that $Z(v; T) = Z(y; T)$, but I have no idea how I would show that either.


2 Answers

BEST ANSWER

In general, $Z(v;T)$ will not equal $Z(y;T)$; luckily, you do not need the equality to hold. (If it did hold, then you would be able to just take $y=v$ always, which would make this whole development unnecessary).

Note that $W\leq U$; and by the definition of $y$, since $v,w\in U$, we get $y\in U$. Moreover, note that $U$ is $T$-invariant: given any vector $x\in U$, we have $T(x)\in U$. Thus, since $y\in U$, we get $Z(y;T)\leq U$.

Thus, $W$ and $Z(y;T)$ are both subspaces of $U$, and hence $W+Z(y;T)\leq U$. Moreover, we know that $W\cap Z(y;T)=\{\mathbf{0}\}$, hence $W+Z(y;T)= W\oplus Z(y;T)$. The only question that remains is why we have that this direct sum is all of $U$.

The answer is that $\dim(U) = \dim(W)+\deg(q)$: since $q$ is the $T$-conductor of $v$ into $W$, if $\deg(q)=n$, then $v$, $T(v),\ldots,T^{n-1}(v)$ are linearly independent ($\deg(q)$ must be less than or equal to $\dim(Z(v;T))$), and are linearly independent from $W$. But as soon as you add $T^n(v)$, you get a linearly dependent set (since you can get a linear combination of $v,\ldots,T^{n-1}(v)$ equal to a vector in $W$).

On the other hand, any element of $Z(v;T)$ can be expressed using $W$ and $v,\ldots,T^{n-1}(v)$: given a polynomial $p(t)$, we can write $p(t) = s(t)q(t) + r(t)$ with $r(t)=0$ or $\deg(r)\lt n$, and hence $p(T)(v)= s(T)(q(T)(v)) + r(T)(v)$; but $q(T)(v)\in W$, hence $s(T)(q(T)(v))\in W$, and $r(T)(v)\in\mathrm{span}(v,T(v),\ldots,T^{n-1}(v))$.

Thus, if you take a basis for $W$ and adjoin the elements $v,T(v),\ldots,T^{n-1}(v)$, you get a basis for $U$.

Now, since $q(T)y=\mathbf{0}$ and $q$ is the $T$-conductor of $y$ into $W$, we also get that $\dim(Z(y;T)) = \deg(q)$. Thus, we have $$\dim(W\oplus Z(y;T)) = \dim(W)+\dim(Z(y;T)) = \dim(W)+\deg(q) = \dim(U),$$ and hence we get $W\oplus Z(y;T) = U$, as claimed.
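The whole construction can be checked on a small concrete example. The sketch below is my own construction, not from Hoffman and Kunze: $T$ is nilpotent on $\mathbb{R}^5$ with Jordan blocks of sizes 3 and 2, $W$ is the first block (a direct summand with $T$-invariant complement, hence $T$-admissible), and the $T$-conductor of $v = e_5 + e_3$ into $W$ works out to $q(t) = t^2$.

```python
import numpy as np

# T nilpotent on R^5, two Jordan blocks of sizes 3 and 2
T = np.zeros((5, 5))
T[0, 1] = T[1, 2] = 1   # block 1: T e2 = e1, T e3 = e2
T[3, 4] = 1             # block 2: T e5 = e4
e = np.eye(5)

W = e[:, :3]             # W = span(e1, e2, e3), T-admissible (invariant summand)
v = e[:, 4] + e[:, 2]    # v = e5 + e3, not in W

def in_span(x, B):
    """x lies in the column span of B iff appending it does not raise the rank."""
    return np.linalg.matrix_rank(np.column_stack([B, x])) == np.linalg.matrix_rank(B)

# The T-conductor of v into W is q(t) = t^2: Tv is not in W, but T^2 v = e1 is.
assert not in_span(T @ v, W)
assert in_span(T @ T @ v, W)

# Pick w in W with q(T)w = q(T)v: here w = e3, since T^2 e3 = e1 = T^2 v.
w = e[:, 2]
y = v - w                                # y = e5
assert np.allclose(T @ T @ y, 0)         # q(T) y = 0

Zy = np.column_stack([y, T @ y])         # Z(y;T) = span(y, Ty), dim = deg q = 2
U = np.column_stack([W, v, T @ v, T @ T @ v])   # U = W + Z(v;T)

# W ∩ Z(y;T) = {0}  <=>  rank(W | Zy) = rank(W) + rank(Zy)
assert np.linalg.matrix_rank(np.column_stack([W, Zy])) == 3 + 2
# and the direct sum exhausts U: dim(U) = dim(W) + deg(q) = 5
assert np.linalg.matrix_rank(U) == 5
print("U = W \u2295 Z(y;T) verified")
```

Note that here $\dim Z(v;T) = 3 > \deg(q) = 2$, illustrating the answer's point that $Z(v;T)$ and $Z(y;T)$ need not coincide even though the conductors agree.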

ANOTHER ANSWER

When I was reading the same page, I had the same question. I found that equation (7-10), $\alpha_k = \beta_k - \gamma_0 - \sum_{1\leq i<k}h_i \beta_i$, gives the result. The key is to express $\alpha_k$ as a combination of the $\beta_i$ with $i\leq k$, and $\beta_k$ as a combination of the $\alpha_i$ with $i\leq k$ (by expanding (7-10) recursively).

Claim: $W_0 + Z(\alpha_1 ;T) + \cdots + Z(\alpha_k ;T) = W_0 + Z(\beta_1 ;T) + \cdots + Z(\beta_k ;T)$

Suppose $w_0+g_1\alpha_1+\cdots+g_k\alpha_k \in LHS$

By (7-10), this equals $w_0+(g_1\beta_1-g_1\gamma_0)+\cdots+(g_k\beta_k-g_k\gamma_0-\sum_{1\leq i<k}g_k h_i \beta_i)$

$= (w_0-g_1\gamma_0-\cdots-g_k\gamma_0)+(g_1\beta_1-\cdots-g_kh_1\beta_1)+\cdots+(g_{k-1}\beta_{k-1}-g_kh_{k-1}\beta_{k-1})+g_k\beta_k \in RHS$

The other direction:

Suppose $w_0+g_1\beta_1+\cdots+g_k\beta_k \in RHS$

By (7-10), this equals $w_0+(g_1\alpha_1+g_1\gamma_0)+\cdots+(g_k\alpha_k+g_k\gamma_0+\sum_{1\leq i<k}g_kh_i\beta_i)$

$=(w_0+g_1\gamma_0+\cdots+g_k\gamma_0)+(g_1\alpha_1+\cdots+g_k h_1 \beta_1)+\cdots+(g_{k-1}\alpha_{k-1}+g_k h_{k-1}\beta_{k-1})+g_k\alpha_k$

Note that $\beta_1 = \alpha_1 + \gamma_0$,

$\beta_2 = \alpha_2+\gamma_0+h_1\beta_1 = \alpha_2 + \gamma_0 + h_1(\alpha_1+\gamma_0)$,

$\beta_3 = \alpha_3 + \gamma_0 + h_1\beta_1 + h_2\beta_2 = \alpha_3+\gamma_0+h_1(\alpha_1+\gamma_0)+h_2(\alpha_2 + \gamma_0 + h_1(\alpha_1+\gamma_0))$, ...

That is, each $\beta_i$ can be expressed as a combination of $\gamma_0$ and $\alpha_1, \alpha_2, \ldots, \alpha_i$.

Say $\beta_i = f_{i_0}\gamma_0 + f_{i_1}\alpha_1 + f_{i_2}\alpha_2+\cdots+f_{i_i}\alpha_i$ for some polynomials $f_{i_0}, f_{i_1}, \ldots, f_{i_i}$.

Then, that equals $(w_0+g_1\gamma_0+\cdots+g_k\gamma_0)+(g_1\alpha_1+\cdots+g_k h_1(f_{1_0}\gamma_0+f_{1_1}\alpha_1))+\cdots+(g_{k-1}\alpha_{k-1} + g_k h_{k-1}(f_{{k-1}_0}\gamma_0+\sum_{1\leq i\leq k-1}f_{{k-1}_i}\alpha_i)) + g_k\alpha_k$

$=(w_0 + g_1\gamma_0 + \cdots + g_k\gamma_0 + g_k h_1 f_{1_0}\gamma_0+\cdots+g_k h_{k-1}f_{{k-1}_0}\gamma_0)+(g_1\alpha_1+\cdots+g_k h_1 f_{1_1}\alpha_1+\cdots+g_k h_{k-1}f_{{k-1}_1}\alpha_1) + \cdots + (g_{k-1}\alpha_{k-1}+g_k h_{k-1} f_{{k-1}_{k-1}}\alpha_{k-1})+g_k\alpha_k \in LHS$, where the first group lies in $W_0$ because $\gamma_0\in W_0$ and $W_0$ is invariant under $T$.
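The triangular substitution above can be sanity-checked symbolically. This is my own illustration, not from the book: since all the operators involved are polynomials in $T$, they commute, so plain commuting symbols can stand in for the polynomials $h_i$ and the vectors $\gamma_0, \alpha_i$ when verifying which $\alpha_j$ appear in each $\beta_i$.

```python
import sympy as sp

# commuting stand-ins for gamma_0, alpha_i and the polynomials h_i
a1, a2, a3, g0, h1, h2 = sp.symbols('alpha1 alpha2 alpha3 gamma0 h1 h2')

# the recursion from (7-10): beta_k = alpha_k + gamma_0 + sum_{i<k} h_i beta_i
b1 = a1 + g0
b2 = a2 + g0 + h1 * b1
b3 = a3 + g0 + h1 * b1 + h2 * b2

expanded = sp.expand(b3)
# beta_3 only involves gamma_0 and alpha_1, alpha_2, alpha_3 -- it is triangular
assert expanded.free_symbols <= {a1, a2, a3, g0, h1, h2}
# the coefficient f_{3_1} of alpha_1 matches the hand expansion: h1 + h2*h1
assert sp.simplify(expanded.coeff(a1) - (h1 + h2 * h1)) == 0
print(expanded)
```

Since each $\beta_i$ is $\alpha_i$ plus terms in $\gamma_0$ and earlier $\beta_j$, the change of generators is unitriangular, which is exactly why the two sums of cyclic subspaces agree.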