A basis for a T-cyclic subspace


I've been coming back to this proof for a few days and I can't convince myself of the fact that $T^{j}(v)$ lies in the span of $\beta = \{v, T(v), T^{2}(v), \ldots, T^{j-1}(v)\}$. I wonder if somebody could help me pin this down? The proof is as follows.


Theorem 5.22. Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $W$ denote the $T$-cyclic subspace of $V$ generated by a nonzero vector $v \in V$. Let $k = \dim(W)$. Then $\left\{v, T(v), T^{2}(v), \ldots, T^{k-1}(v)\right\}$ is a basis for $W$.

Proof. (a) Since $v \neq 0$, the set $\{v\}$ is linearly independent. Let $j$ be the largest positive integer for which
$$ \beta=\left\{v, T(v), \ldots, T^{j-1}(v)\right\} $$
is linearly independent. Such a $j$ must exist because $V$ is finite-dimensional. Let $Z=\operatorname{span}(\beta)$. Then $\beta$ is a basis for $Z$. Furthermore, $T^{j}(v) \in Z$ by the linear independence theorem. We use this information to show that $Z$ is a $T$-invariant subspace of $V$.

Let $w \in Z$. Since $w$ is a linear combination of the vectors of $\beta$, there exist scalars $b_{0}, b_{1}, \ldots, b_{j-1}$ such that
$$ w=b_{0} v+b_{1} T(v)+\cdots+b_{j-1} T^{j-1}(v), $$
and hence
$$ T(w)=b_{0} T(v)+b_{1} T^{2}(v)+\cdots+b_{j-1} T^{j}(v). $$
Thus $T(w)$ is a linear combination of vectors in $Z$, and hence belongs to $Z$. So $Z$ is $T$-invariant. Furthermore, $v \in Z$. By Exercise 11, $W$ is the smallest $T$-invariant subspace of $V$ that contains $v$, so that $W \subseteq Z$. Clearly, $Z \subseteq W$, and so we conclude that $Z = W$. It follows that $\beta$ is a basis for $W$, and therefore $\dim(W)=j$. Thus $j=k$. This proves (a).
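As a numerical sanity check (not part of the original proof), the greedy construction in the proof can be sketched with NumPy: keep appending $T^{j}(v)$ until the rank stops growing, i.e. until $T^{j}(v)$ falls into the span of the earlier vectors. The helper name `cyclic_basis` is hypothetical.

```python
import numpy as np

def cyclic_basis(T, v, tol=1e-10):
    """Collect v, T(v), T^2(v), ... while the set stays linearly
    independent; stop at the first j with T^j(v) in the span of
    the earlier vectors (illustrating the proof of Theorem 5.22)."""
    basis = [v]
    while True:
        w = T @ basis[-1]                      # the next vector T^j(v)
        candidate = np.column_stack(basis + [w])
        # rank < column count exactly when T^j(v) is a linear
        # combination of the vectors already collected
        if np.linalg.matrix_rank(candidate, tol=tol) < candidate.shape[1]:
            return np.column_stack(basis)      # beta = {v, ..., T^{j-1}(v)}
        basis.append(w)

# Example operator on R^3 where W is a proper subspace (dim W = 2 < 3)
T = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 5.]])
v = np.array([0., 1., 0.])
B = cyclic_basis(T, v)
print(B.shape[1])  # prints 2, i.e. j = dim(W) = 2
```

Here $v = e_2$, $T(v) = e_1$, and $T^{2}(v) = 0$ already lies in $\operatorname{span}\{v, T(v)\}$, so the loop stops at $j = 2$.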


I definitely see it in the case $j = \dim(V)$ (i.e., when the largest $j$ for which $\beta$ is linearly independent equals the dimension of $V$).

Then $T^{j}(v)$ must lie in the span of $\beta$, because $\beta$ now spans the whole of $V$.

But in the case that $j < \dim(V)$, what exactly prevents $T^{j}(v)$ from landing outside $\operatorname{span}(\beta)$, in the part of $V$ that $\beta$ doesn't reach? I think this is my problem. Is this implicitly taken care of by the assumption that $j$ is the largest integer for which $\beta$ is linearly independent?

Thank you very much in advance!

Best answer:

By definition, $j$ is the largest positive integer for which

$$ \beta=\left\{v, \mathrm{T}(v), \ldots, \mathrm{T}^{j-1}(v)\right\} $$

is linearly independent. This means $\beta \cup \{T^j(v)\}$ is linearly dependent. Combining this with the fact that $\beta$ itself is linearly independent implies that $T^j(v)$ can be written as a linear combination of the vectors in $\beta$.


This last fact is probably what's tripping you up: if $S =\{ v_1,\ldots, v_n\}$ is a linearly independent set of vectors, and $S \cup \{w\}$ is linearly dependent, then $w$ can be written as a linear combination of the vectors in $S$.

Proof. By linear dependence, there are scalars $\alpha_1, \ldots, \alpha_n$ and $\gamma$ (not all $0$) such that

$$\alpha_1 v_1 + \cdots + \alpha_n v_n + \gamma w = 0.$$

If $\gamma$ were $0$, this would be a nontrivial dependence among the vectors of $S$, so $S$ would not be linearly independent. Hence $\gamma \neq 0$, and we can move $\gamma w$ to the other side and divide by $\gamma$ to get
$$w = -\frac{1}{\gamma}\left(\alpha_1 v_1 + \cdots + \alpha_n v_n\right),$$
which expresses $w$ as a linear combination of the vectors in $S$.
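The lemma can also be checked numerically: take an independent set $S$, build a $w$ in its span (so $S \cup \{w\}$ is dependent), and recover the coefficients by solving $S c = w$. The specific vectors below are an illustrative choice, not from the original post.

```python
import numpy as np

# S = {v1, v2} is linearly independent, and w = 2*v1 - 3*v2,
# so S ∪ {w} is linearly dependent.
v1 = np.array([1., 0., 0.])
v2 = np.array([0., 1., 1.])
w = 2 * v1 - 3 * v2

S = np.column_stack([v1, v2])
# Solve S c = w by least squares; an exact solution exists
# precisely because w lies in span(S).
c, residual, rank, _ = np.linalg.lstsq(S, w, rcond=None)
print(c)  # prints the coefficients, approximately [ 2. -3.]
```

The recovered coefficients $c = (2, -3)$ are exactly the $-\alpha_i/\gamma$ from the proof above.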