If $T^rv=v$ and $v,Tv,\ldots,T^{r-1}v$ are linearly independent, every $r$th root of unity is an eigenvalue


This is a self-answered question; I think it is a cute exercise. Alternative answers are welcome, of course.

Let $V$ be a finite-dimensional vector space over a field $F$, and let $T:V \to V$ be a linear map.

Suppose there exists a vector $v \in V$ such that $T^rv=v$, and $v,Tv,\ldots,T^{r-1}v$ are linearly independent. (In particular, this implies that $r$ is minimal, i.e. $T^kv \neq v$ for $0<k<r$).

Then every $\lambda \in F$ satisfying $\lambda^r=1$ is an eigenvalue of $T$.

How to prove this claim?


Comment: This claim is false without the independence assumption. Take e.g. $T=-\operatorname{Id}$, and any $v \neq 0$. Then $T^2v=v$, but $1$ is not an eigenvalue of $T$.

There are 4 answers below.

Best answer:

This follows from standard facts. From the given properties, $X^r-1$ is the minimal polynomial of the restriction of $T$ to the minimal $T$-stable subspace $W$ of $V$ containing $v$, the one spanned by $\{\,T^kv\mid k\in\Bbb N\,\}$. This minimal polynomial divides the minimal polynomial $\mu_T$ of (unrestricted) $T$, so every root of $X^r-1$ is a root of $\mu_T$. Every root (in $F$) of $\mu_T$ is an eigenvalue of $T$. (Or you could avoid looking at the unrestricted $T$ altogether: every root $\lambda$ of $X^r-1$ is an eigenvalue of the restriction $T|_W$; in particular there are eigenvectors for $\lambda$ of $T|_W$, and therefore of $T$, inside $W$.)

For the record, the fact that every root $\lambda$ of $\mu_T$ is an eigenvalue has a very explicit proof: write $\mu_T=(X-\lambda)Q$; then $Q[T]\neq0$ (by minimality of $\mu_T$) and the image of $Q[T]$ is contained in the eigenspace $\ker((X-\lambda)[T])$ for $\lambda$. In the example, $Q=\sum_{i=0}^{r-1}\lambda^{r-1-i}X^i$.
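This explicit construction can be sanity-checked numerically (illustrative only, not part of the argument). Below, $T$ is taken to be the cyclic shift matrix on $\mathbb{C}^r$, a model case satisfying the hypotheses, and we verify that the image of $Q[T]$ really lies in the $\lambda$-eigenspace; the size `r = 5` and the random seed are arbitrary choices:

```python
import numpy as np

r = 5
# Cyclic shift matrix: T e_i = e_{i+1 mod r}, so T^r = I and
# e_0, T e_0, ..., T^{r-1} e_0 are linearly independent.
T = np.roll(np.eye(r), 1, axis=0)

lam = np.exp(2j * np.pi / r)          # a primitive r-th root of unity
# Q = sum_{i=0}^{r-1} lam^{r-1-i} X^i, so that (X - lam) Q = X^r - 1.
Q_T = sum(lam ** (r - 1 - i) * np.linalg.matrix_power(T, i) for i in range(r))

x = np.random.default_rng(0).standard_normal(r)   # an arbitrary vector
w = Q_T @ x
assert np.linalg.norm(w) > 1e-9       # Q[T] is nonzero on x
assert np.allclose(T @ w, lam * w)    # image of Q[T] lies in the lam-eigenspace
```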

Answer:

Define $w=\sum_{i=0}^{r-1} \lambda^{-i}T^iv$.

$w \neq 0$ by the independence assumption, and a direct computation (using $T^rv=v$ and $\lambda^r=1$) gives $Tw=\lambda w$: the two sums below agree after shifting the index by one:

$$ Tw=\sum_{i=0}^{r-1} \lambda^{-i}T^{i+1}v $$

$$ \lambda w=\sum_{i=0}^{r-1} \lambda^{-(i-1)}T^iv. $$


We can also reduce the argument to the case $\lambda=1$:

Write $S:=\lambda^{-1}T$. Then $S^r v=v$. Setting $w=\sum_{i=0}^{r-1} S^iv$, we immediately get $Sw=w \neq 0$, i.e. $\lambda^{-1}Tw=w$, which is equivalent to $Tw=\lambda w$.
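As an illustrative check (not part of the proof), take $T$ to be the cyclic shift matrix on $\mathbb{C}^r$ with $v=e_0$, which satisfies the hypotheses, and verify that $w=\sum_{i=0}^{r-1}\lambda^{-i}T^iv$ is a $\lambda$-eigenvector for every $r$th root of unity $\lambda$; the size `r = 6` is an arbitrary choice:

```python
import numpy as np

r = 6
T = np.roll(np.eye(r), 1, axis=0)   # cyclic shift: T^r = I, and
v = np.zeros(r); v[0] = 1.0         # v, Tv, ..., T^{r-1}v are independent

for k in range(r):                  # every r-th root of unity lam = exp(2*pi*i*k/r)
    lam = np.exp(2j * np.pi * k / r)
    w = sum(lam ** (-i) * (np.linalg.matrix_power(T, i) @ v) for i in range(r))
    assert np.linalg.norm(w) > 1e-9     # w != 0, thanks to independence
    assert np.allclose(T @ w, lam * w)  # Tw = lam w
```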

Answer:

This is very similar to your answer. Assume $I,T,\dots,T^{r-1}$ are linearly independent (no other assumption yet).

Let $\theta$ be nonzero. Of course,

$$ (\theta^{-r} T^r -I) = (\theta^{-1} T-I)(I+ (\theta^{-1}T) + \dots + (\theta^{-1}T)^{r-1}).$$

Now suppose $v$ is an eigenvector for $T^r$ corresponding to the eigenvalue $\theta^{r}$. Apply both sides to $v$. The left-hand side is zero and the right-hand side is $(\theta^{-1} T -I)w$ where $w=(I+ (\theta^{-1}T) + \dots + (\theta^{-1}T)^{r-1})v$. By assumption $w$ is nonzero, hence it is an eigenvector for $T$ corresponding to the eigenvalue $\theta$.
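The operator identity itself can be spot-checked numerically with an arbitrary matrix and an arbitrary nonzero $\theta$ (a sanity check only; the sizes, seed, and value of `theta` below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
r, n = 4, 5
T = rng.standard_normal((n, n))     # arbitrary matrix
theta = 1.7                         # arbitrary nonzero scalar

A = theta ** (-1) * T
# (theta^{-r} T^r - I) = (theta^{-1} T - I)(I + theta^{-1}T + ... + (theta^{-1}T)^{r-1})
lhs = theta ** (-r) * np.linalg.matrix_power(T, r) - np.eye(n)
rhs = (A - np.eye(n)) @ sum(np.linalg.matrix_power(A, i) for i in range(r))
assert np.allclose(lhs, rhs)
```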

Answer:

I am elaborating and adding some comments and general perspective, based on Marc van Leeuwen's great answer:

We define $W=\operatorname{span}\{\,T^kv\mid 0\le k\in\Bbb Z\,\}$. Note that $\dim V < \infty$ implies $T(W) \subseteq W$ (since for some $k$, $T^kv$ is a linear combination of its predecessors $T^iv$ for $i<k$).

The assumption $T^rv=v$ implies $W=\operatorname{span}\{v,Tv,\ldots,T^{r-1}v\}$. Write $S:=T|_{W}:W \to W$.

We have the following observations:

At the moment we do not assume linear independence of the $T^iv$ for $i<r$.

  1. First, every eigenvalue of $S$ is an $r$th root of unity. Indeed, the assumption $S^rv=v$ implies $S^r(S^kv)=S^k(S^rv)=S^kv$, so $S^r$ acts as the identity on the spanning set $\{v,Sv,\ldots,S^{r-1}v\}$ of $W$, thus $S^r=\operatorname{Id}_W$. But if $\lambda$ is an eigenvalue of $S$, then $\lambda^r$ is an eigenvalue of $S^r=\operatorname{Id}_W$, so $\lambda^r=1$ as required.

  2. If we do not assume $F$ is algebraically closed, then $S$ may not possess any eigenvalue at all, e.g. rotation by $\pi/2$ in the real plane. If $F$ is algebraically closed, then of course $S$ admits at least one eigenvalue, which must be an $r$th root of unity by the previous observation. Even then, $S$ may fail to have all the $r$th roots of unity in $F$ as eigenvalues: $S=-\operatorname{Id}$ with $r=2$ does not have the eigenvalue $1$. There can be dimensional reasons as well: if $S$ is rotation by $\pi/2$, then $r=4$, but $S$ has at most two eigenvalues, even when considered as $S:\mathbb{C}^2 \to \mathbb{C}^2$. So if $r>\dim V$, we certainly do not get all the $r$th roots of unity as eigenvalues.

  3. Finally, we now use the assumed independence of $v,Sv,\ldots,S^{r-1}v$:

We saw that $S^r=\operatorname{Id}$, i.e. $p(S)=0$, where $p(X)=X^r-1$. Thus, the minimal polynomial $m$ of $S$ divides $p$. We claim that the independence assumption implies in fact that $m=p$:

If $m \neq p$, then $\deg m < \deg p=r$: if the degrees were equal, $m-p$ would annihilate $S$ and have lower degree than $m$, contradicting the minimality of $m$. Thus $m(X)=\sum_{i=0}^{r-1}a_iX^i$, and $m(S)=0$ implies $\sum_{i=0}^{r-1}a_iS^iv=0$, so $a_i=0$ for all $i$ by independence, i.e. $m=0$, which is a contradiction.

Since every root of the minimal polynomial is an eigenvalue, this shows every $r$th root of unity is an eigenvalue.
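The conclusion can be sanity-checked numerically in the model case of the cyclic shift matrix, where the hypotheses hold with $v=e_0$ (illustrative only; `r = 5` is an arbitrary choice):

```python
import numpy as np

r = 5
S = np.roll(np.eye(r), 1, axis=0)   # cyclic shift on C^r: S^r = Id
v = np.eye(r)[:, 0]

# v, Sv, ..., S^{r-1}v are e_0, ..., e_{r-1}: independent, spanning W = C^r.
V = np.column_stack([np.linalg.matrix_power(S, i) @ v for i in range(r)])
assert np.linalg.matrix_rank(V) == r

# Every r-th root of unity appears among the eigenvalues of S.
eigs = np.linalg.eigvals(S)
for k in range(r):
    z = np.exp(2j * np.pi * k / r)
    assert np.min(np.abs(eigs - z)) < 1e-8
```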