Let $A$ be an $n\times n$ complex matrix. Then there exists an invertible matrix $T$ such that $T^{-1}AT=J$, where $J$ is a Jordan matrix whose diagonal entries are the eigenvalues of $A$. Equivalently, the columns of $T$ form a set of linearly independent vectors $x_1,\dots,x_n$ such that $Ax_k=\lambda_k x_k$ or $Ax_k=\lambda_k x_k+x_{k-1}$.
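As a sanity check on the statement (my own illustration, not from the book), SymPy's `jordan_form` computes exactly such a pair $(T, J)$ for a small example matrix:

```python
from sympy import Matrix

# Example matrix with eigenvalue 2 in a nontrivial Jordan block and eigenvalue 3.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

# jordan_form returns (T, J) with A = T * J * T^{-1}.
T, J = A.jordan_form()
assert A == T * J * T.inv()

# Each column x_k of T satisfies A x_k = λ_k x_k  or  A x_k = λ_k x_k + x_{k-1}.
```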
There is an inductive proof due to Filippov.
When $n=1$ the Jordan canonical form of the matrix $[a]$ is $[a]$ itself.
Assume the existence of a Jordan canonical form for all $r\times r$ matrices, $r=1,2,\dots,n-1$; i.e., every $A_{r\times r}$ is similar to a Jordan matrix.
Consider an $n\times n$ matrix $A$. Assume that $\lambda=0$ is an eigenvalue, i.e., $A$ is singular, so the dimension of the column space $C(A)$ is $r<n$.
Then the book says that, by the induction hypothesis, $A$ (more precisely, the linear operator associated with $A$) restricted to its range has a Jordan canonical form.
How do I make sense of this statement?
I think it means the following.
Thanks @ancientmathematician for pointing me in the right direction.
If we think of another transformation $T:\operatorname{range}(A)\to \operatorname{range}(A)$ and the corresponding matrix $B_{r\times r}$ associated with it, then by the induction hypothesis there exists a Jordan canonical basis $(w_1,\cdots,w_r)$ for $\operatorname{range}(A)$ such that $Bw_k=\lambda_k w_k$ or $Bw_k=\lambda_k w_k+w_{k-1}$.
The book seems to use the same symbol $A$ for $B_{r\times r}$, which might be the source of confusion here!
Ok.
Even if that is the case, how can one advance the proof from here and find a Jordan basis for the original matrix $A_{n\times n}$?
How do I get from $Bw_k=\lambda_k w_k$ or $Bw_k=\lambda_k w_k+w_{k-1}$ to $Aw_k=\lambda_k w_k$ or $Aw_k=\lambda_k w_k+w_{k-1}$ for the original $n\times n$ matrix $A$?
Please check A Primer of Abstract Mathematics by Robert B. Ash.
Thanks @ancientmathematician for the hint.
If we think of another transformation $T:\operatorname{range}(A)\to\operatorname{range}(A)$ and the corresponding matrix $B_{r\times r}$ associated with it, i.e., $B_{r\times r}$ represents the same $\operatorname{range}(A)\to\operatorname{range}(A)$ transformation as $A_{n\times n}$, just on the smaller space,
then by the induction hypothesis there exists a Jordan canonical basis $(w'_1,\cdots,w'_r)$ for $\operatorname{range}(A)$ such that $Bw'_k=\lambda_k w'_k$ or $Bw'_k=\lambda_k w'_k+w'_{k-1}$.
Step 1
This implies there exists a Jordan canonical basis $(w_1,\cdots,w_r)$ for $\operatorname{range}(A)$ such that $Aw_k=\lambda_k w_k$ or $Aw_k=\lambda_k w_k+w_{k-1}$. The vectors $w_i$ and $w'_i$ are the same, except that $w_i$ is written in the coordinates of the larger space $\mathbb{C}^n$, while $w'_i$ is the same vector written in the coordinates of the $r$-dimensional subspace $\operatorname{range}(A)$.
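Step 1 can be made concrete with a small sketch (my own illustration, with a hypothetical rank-2 matrix, not taken from Ash's book): pick a basis of $R(A)$, form the $r\times r$ matrix $B$ of the restricted operator in that basis, and check that lifting $B$'s Jordan basis back into $\mathbb{C}^n$ gives vectors on which $A$ acts Jordan-style:

```python
from sympy import Matrix

# A hypothetical singular 3x3 matrix of rank r = 2, used only for illustration:
A = Matrix([[0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]])

W = A.columnspace()              # basis w_1, ..., w_r of R(A)
M = Matrix.hstack(*W)            # n x r matrix whose columns are that basis

# B is the r x r matrix of A restricted to R(A), defined by A*M = M*B.
# M has full column rank, so the pseudoinverse solves this exactly.
B = M.pinv() * (A * M)
assert A * M == M * B

# Jordan basis w'_k of B (coordinates inside R(A)), lifted to C^n as w_k = M*w'_k:
P, J = B.jordan_form()
W_lift = M * P
assert A * W_lift == W_lift * J  # A acts on the w_k exactly as B acts on the w'_k
```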
Step 2
Let the subspace $N(A)\cap R(A)$ have dimension $p$. This subspace is the eigenspace of the restricted operator corresponding to the eigenvalue $\lambda=0$, so among the basis vectors $(w_1,\cdots,w_r)$ there are $p$ linearly independent eigenvectors with eigenvalue $\lambda=0$. Since each such $w_i\in R(A)$, we have $w_i=Ay_i$ for some $y_i$. Since $Ay_i=0\cdot y_i+w_i$, we can place each $y_i$ immediately after the corresponding $w_i$ in its string, for all $p$ members of $N(A)\cap R(A)$.
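Finding such a $y_i$ is just solving the linear system $Ay=w_i$; any particular solution extends the chain. A sketch with the same hypothetical matrix as above (again my own illustration):

```python
from sympy import Matrix

# Hypothetical singular matrix (same illustrative A as in the restriction sketch):
A = Matrix([[0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]])

# w is an eigenvector for λ = 0 that also lies in R(A): A*w = 0 and w = A*e_2.
w = Matrix([1, 0, 0])

# Step 2 needs some y with A*y = w; any particular solution will do.
y, params = A.gauss_jordan_solve(w)
y = y.subs({p: 0 for p in params})   # fix the free parameters to 0
assert A * y == w                    # y extends the Jordan chain ending at w
```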
Step 3
Since $\dim[N(A)]=n-r$, there are $n-r-p$ vectors $z_i\in N(A)$ that do not belong to $N(A)\cap R(A)$; i.e., there are $n-r-p$ linearly independent eigenvectors $z_i$ with $Az_i=0$ that lie in $N(A)$ but not in $R(A)$.
We now have $r+p+(n-r-p)=n$ vectors in $\mathbb{C}^n$, each satisfying $Ax_k=\lambda_k x_k$ or $Ax_k=\lambda_k x_k+x_{k-1}$, so we just have to verify the linear independence of the $w_i$, $y_i$, $z_i$ to prove the existence of a Jordan canonical basis for any singular square matrix.
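Putting the three steps together on a hypothetical example (my own illustration, with $r=1$ and $p=1$, so one $y_i$ from Step 2 and one $z_i$ from Step 3): the assembled columns are independent and conjugate $A$ to a Jordan matrix.

```python
from sympy import Matrix

# A hypothetical singular 3x3 matrix with rank r = 1 and dim(N(A) ∩ R(A)) = p = 1:
A = Matrix([[0, 1, 1],
            [0, 0, 0],
            [0, 0, 0]])

w1 = Matrix([1, 0, 0])    # Step 1: Jordan basis of R(A); A*w1 = 0, w1 ∈ N(A) ∩ R(A)
y1 = Matrix([0, 1, 0])    # Step 2: A*y1 = w1 extends the chain out of R(A)
z1 = Matrix([0, 1, -1])   # Step 3: z1 ∈ N(A), z1 ∉ R(A); the n - r - p = 1 leftover vector

T = Matrix.hstack(w1, y1, z1)
assert T.det() != 0                  # the n vectors are linearly independent
J = T.inv() * A * T                  # one 2x2 block and one 1x1 block for λ = 0
assert J == Matrix([[0, 1, 0],
                    [0, 0, 0],
                    [0, 0, 0]])
```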