Why do successive compositions of nilpotent operators strictly decrease their rank?


$\DeclareMathOperator{\N}{\mathcal{N}}\DeclareMathOperator{\Ker}{Ker}\DeclareMathOperator{\id}{Id}\DeclareMathOperator{\rk}{Rank}$It is important, in my reading of a formal proof of the existence of the Jordan canonical form, to also rigorously show that the statement in the title is true. A step in this proof, key both to proving existence and to constructing the JNF itself, assumes the following: where $V$ is a finite-dimensional vector space, $A:V\to V$ is a linear operator, and $\lambda$ is such that $A-\lambda\id$ is singular,

$$\N_m=\Ker(A-\lambda\id)^m\\\N_1\subsetneq\N_2\subsetneq\N_3\subsetneq\cdots\subsetneq\N_k=\N_{k+1}=\N_{k+2}=\cdots$$

The fact that $B=A-\lambda\id$ is singular implies $B$ is nilpotent (edit: no it doesn't!) - I believed a matrix is nilpotent if and only if it is singular (edit: not true!). I understand the latter equalities - the rank and nullity stabilise at some point $k$, and this is proven in the article I am reading - but the assumption that $\dim\N_{i+1}\geqslant1+\dim\N_i$ for $i\lt k$ goes unproven, because it is "obvious", in the sense that I instinctively feel it is true. And it really is intuitive: the nilpotent operator should successively reduce its own rank until the rank becomes $0$, but all the same I cannot prove, for an arbitrary nilpotent operator $B$, that $\rk(B^{m+1})\leqslant\rk(B^m)-1$ for $m\lt k$.
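To make the claimed strict drop concrete, here is a quick numerical check (an illustration, not a proof; the matrix and the NumPy calls are my own choices, not from the original discussion) using a single nilpotent Jordan block, for which the rank falls by exactly one at each power:

```python
import numpy as np

# A 4x4 nilpotent Jordan block: ones on the superdiagonal, so B^4 == 0.
B = np.diag(np.ones(3), k=1)

# Rank of B^0, B^1, ..., B^4: it drops by exactly 1 at each step.
ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(B, m)))
         for m in range(5)]
print(ranks)  # [4, 3, 2, 1, 0]
```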

I am happy to attempt this myself with hints, or see a full rigorous answer. Any observations are appreciated.

EDIT:

To be clear:

Very specifically, it is not the notion that there exists $k$ such that the sequence stabilises that I care about - it is the chain $\N_1\subsetneq\N_2\subsetneq\cdots\subsetneq\N_k$ that I ask for a proof of, where $\N_i$ is the nullspace of the $i$th power of a nilpotent operator.

  1. Take $A$ to be a nilpotent linear operator on a finite-dimensional vector space $V$ (an endomorphism of $V$), and let $\N$ denote the nullity of a linear operator. Then: $$\rk(A)\gt\rk(A^2)\gt\cdots\gt\rk(A^k)=\rk(A^{k+1})=\cdots=0\\\N(A)\lt\N(A^2)\lt\cdots\lt\N(A^k)=\N(A^{k+1})=\cdots=\dim V$$

Why is this so, and is the sequence of ranks strictly decreasing only for nilpotent operators, or not only for them?

And the question is mostly about the strict nature of this sequence - I know why the equalities at the end hold; it is the strict inequalities at the beginning that I am interested in.
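A hedged numerical illustration of the displayed chains (my own example, not from the question): with two nilpotent Jordan blocks the nullity still strictly increases at every step, but it can jump by more than one, so $\rk(B^{m+1})\leqslant\rk(B^m)-1$ is the most one can say in general.

```python
import numpy as np

# Block-diagonal nilpotent operator: Jordan blocks of sizes 3 and 2.
J3 = np.diag(np.ones(2), k=1)   # 3x3 nilpotent block
J2 = np.diag(np.ones(1), k=1)   # 2x2 nilpotent block
A = np.block([[J3, np.zeros((3, 2))],
              [np.zeros((2, 3)), J2]])

n = A.shape[0]
# Nullity of A^0, ..., A^3: strictly increasing, but the first two steps
# each add 2 dimensions, not 1.
nullities = [n - int(np.linalg.matrix_rank(np.linalg.matrix_power(A, m)))
             for m in range(4)]
print(nullities)  # [0, 2, 4, 5]
```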


There are 4 answers below.

BEST ANSWER

Let $A$ be a linear operator that is not zero. Let $N_k$, $k\geq 1$, be the nullspace of $A^k$. We trivially have that $N_k\subseteq N_{k+1}$ for all $k$.

Claim. If $k$ is such that $N_k=N_{k+1}$, then $N_k=N_{k+r}$ for all $r\geq 1$.

Proof. Induction on $r$. True by assumption for $r=1$. Assume that we know that $N_k=N_{k+r}$. We show that $N_{k+r}=N_{k+r+1}$. Let $v\in N_{k+r+1}$. Then $A^{k+r+1}v = \mathbf{0}$, so $Av\in N_{k+r}=N_k$. That means that $A^k(Av) = A^{k+1}v = \mathbf{0}$, so $v\in N_{k+1}\subseteq N_{k+r}$. Thus, $N_{k+r+1}\subseteq N_{k+r}$ which proves the desired equality. $\Box$

Note this is not restricted to nilpotent operators; it is true for any linear operator. The nullspaces will get larger under consecutive applications of the operator until at some point they stabilize... and then that's where they stay, and they never get any bigger. So if you have the same nullspace for $A^7$ and $A^8$, then that's it; that's the nullspace for all $A^k$ with $k\geq 7$.

Thus, if the sequence ever stabilizes at one step, then that's it. It will "stop" at that point and never get any bigger. For a nilpotent operator, until you get to $N_k=V$, each previous step must enlarge these nullspaces. So you have strict inclusion at each step until you get to $N_k=V$. (For arbitrary operators you'll get a strict inclusion until you get to the generalized eigenspace of $0$.)
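A small numerical sanity check of this point (my own example matrix): for an operator that is singular but not nilpotent, the nullspaces grow strictly and then stabilize at the generalized eigenspace of $0$, which here has dimension $2$, short of $\dim V = 3$.

```python
import numpy as np

# A 2x2 nilpotent block plus an invertible 1x1 block: singular, not nilpotent.
A = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 5.]])

nullity = lambda M: M.shape[0] - int(np.linalg.matrix_rank(M))
dims = [nullity(np.linalg.matrix_power(A, k)) for k in range(5)]
print(dims)  # [0, 1, 2, 2, 2] -- stabilizes at the generalized 0-eigenspace
```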

ANSWER

First, it's not true that $B$ singular implies $B$ nilpotent. The former means that $0$ is an eigenvalue of $B$ (equivalently, $\lambda$ is an eigenvalue of $A$), and the latter means that all eigenvalues of $B$ are zero (equivalently, all eigenvalues of $A$ are $\lambda$). Think about the matrix $$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$$ But the desired result $\mathcal N_1 \subsetneq \cdots \subsetneq \mathcal N_k = \mathcal N_{k+1} = \cdots$ is actually true for any linear transformation $B: V \to V$ with $V$ finite-dimensional. The only relevance of nilpotence/singularity here is that $B$ is nilpotent iff $\mathcal N_k = V$, and $B$ is singular iff $\mathcal N_k \neq 0$.
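The counterexample above can be checked directly (a quick check, using NumPy as one possible tool): this projection equals all of its own powers, so it is singular yet never nilpotent.

```python
import numpy as np

# The answer's counterexample: singular (0 is an eigenvalue) but idempotent,
# hence every power equals B itself and is never the zero matrix.
B = np.array([[1., 0.],
              [0., 0.]])

assert all(np.array_equal(np.linalg.matrix_power(B, m), B) for m in range(1, 6))
print(int(np.linalg.matrix_rank(B)))  # 1 -- the rank never drops to 0
```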

Hint for the proof: it suffices to prove that if $\mathcal N_i = \mathcal N_{i+1}$, then $\mathcal N_{i+1} = \mathcal N_{i+2}$. I guess you already know that $\mathcal N_{i+1} \subseteq \mathcal N_{i+2}$, so choose an element $v \in \mathcal N_{i+2}$ and use your assumption to prove it's in $\mathcal N_{i+1}$.

ANSWER

Let $V$ be an $n$-dimensional vector space over $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$, and let $S \colon V \to V$ be a linear map. We have the nested sequence of null spaces $$\{0\} = N(I) \subseteq N(S) \subseteq N(S^2) \subseteq \dots$$ Note that $N(S^j) \neq N(S^{j + 1}) \iff \dim N(S^j) < \dim N(S^{j + 1})$; since dimensions are bounded by $n$, the sequence must stabilize at some point, so we can let $m$ be the smallest nonnegative integer such that $N(S^m) = N(S^{m + 1})$. Now if $N(S^j) = N(S^{j + 1})$, then $N(S^{j + 1}) = N(S^{j + 2})$, since $$S^{j + 2}v = 0 \implies S^{j + 1}Sv = 0 \implies S^{j}Sv = 0 \implies S^{j + 1}v = 0.$$ Thus by induction, $N(S^{m}) = N(S^{m + 1}) = N(S^{m + 2}) = \dots$. Finally, $0 = \dim N(I) < \dim N(S) < \dots < \dim N(S^m)$ implies $m \leq n$, as the dimension cannot increase past $n$.

A similar proof shows that the sequence of ranges $V = R(I) \supseteq R(S) \supseteq R(S^2) \supseteq \dots$ stabilizes at some smallest nonnegative integer $p \leq n$.

To summarize, we have $$\{0\} = N(I) \subsetneq N(S) \subsetneq N(S^2) \subsetneq \dots \subsetneq N(S^m) = N(S^{m + 1}) = \dots$$ and $$V = R(I) \supsetneq R(S) \supsetneq R(S^2) \supsetneq \dots \supsetneq R(S^p) = R(S^{p + 1}) = \dots$$

Now specialize to the case where $S$ is nilpotent, so $S^q = 0$ for some $q \geq 1$. Since $S$ is nilpotent, the sequence of null spaces stabilizes to $N(S^q) = V$, and the sequence of ranges stabilizes to $R(S^q) = \{0\}$.
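As a hedged illustration of the two stabilizing chains for a nilpotent $S$ (my own example matrix, not from the answer): by rank-nullity, $\dim R(S^m) + \dim N(S^m) = n$, so the range chain shrinks to $\{0\}$ exactly as the null-space chain grows to $V$.

```python
import numpy as np

# A strictly upper-triangular (hence nilpotent) S with S^3 == 0.
S = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [0., 0., 0.]])
n = S.shape[0]

# dim R(S^m) for m = 0..3; dim N(S^m) follows by rank-nullity.
ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(S, m)))
         for m in range(4)]
print("dim R(S^m):", ranks)                   # [3, 2, 1, 0]
print("dim N(S^m):", [n - r for r in ranks])  # [0, 1, 2, 3]
```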

ANSWER

After modifications of the question, I'll talk about the following. Let $A$ be any linear operator on a finite-dimensional vector space; we know that the sequence of subspaces $(N_k)_{k\in\Bbb N}$, where $N_k=\ker(A^k)$, is weakly increasing, and therefore, given the finite dimension, must ultimately stabilise. Why is it always strictly increasing (in dimension) up to the point where it ultimately stabilises?

One may show this by showing that whenever a pair of successive dimensions are equal, so is the next pair (overlapping with it in one term); an immediate induction will then show that the dimensions will then stay constant from there on, and this proves the strict increase up to reaching its ultimate dimension. So assume $N_k=N_{k+1}$; we want to show that $N_{k+1}=N_{k+2}$. Since we already saw that that $N_{k+1}\subseteq N_{k+2}$, it will suffice to show that $N_{k+2}\subseteq N_{k+1}$. So assume furthermore that $v$ is such that $v\in N_{k+2}$, in other words $A^{k+2}(v)=0$. Then $A^{k+1}(A(v))=0$ so $A(v)\in N_{k+1}=N_k$, which means that $0=A^k(A(v))=A^{k+1}(v)$, and $v\in N_{k+1}$, as desired.