I would like to prove that the following algorithm, which finds a similar matrix in reduced (triangular) form (this includes the Jordan normal form), works, but one step is giving me trouble.
For the sake of completeness, I give all the context necessary to understand my question.
We work in $E$, a finite-dimensional vector space over an algebraically closed field, and we consider an endomorphism $f$ of $E$.
The algorithm proceeds as follows:
For each eigenvalue $\lambda_k$, we start by finding a basis $(v_1,…, v_p)$ of the eigenspace $F(\lambda_k)$. In many cases this is not sufficient (the algebraic multiplicity of the eigenvalue $\lambda_k$ is greater than the dimension of $F(\lambda_k)$). In that case, we "complete" the basis $(v_1,…, v_p)$ in order to obtain a basis of the spectral eigenspace of $\lambda_k$, denoted $S(\lambda_k)$ (definition given below), which contains $F(\lambda_k)$.
For this, we take a linear combination of the basis vectors (denoted $b_{p+1}$) and seek a $w_{p+1}$ satisfying
$$ (f-\lambda_k I)(w_{p+1}) = \sum_{i=1}^{p}\alpha_i v_i = b_{p+1} $$
Then we add $w_{p+1}$, **which is linearly independent of the basis**, to the basis $(v_1,…,v_p)$, and we continue by taking a linear combination of $(v_1,…,v_p,w_{p+1})$ (denoted $b_{p+2}$) that is linearly independent of $b_{p+1}$, and we seek a solution $w_{p+2}$ of
$$ (f-\lambda_k I)(w_{p+2}) = b_{p+2} $$
We follow this procedure until we get a linearly independent family $(v_1,…,v_p,w_{p+1},…,w_{p+k})$ whose cardinality equals the algebraic multiplicity of the eigenvalue $\lambda_k$.
By doing this, we determine a basis of $S(\lambda_k)$, since a well-known theorem states that (for a finite-dimensional vector space over an algebraically closed field) the dimension of $S(\lambda_k)$ equals the algebraic multiplicity of the eigenvalue $\lambda_k$.
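To make the completion step concrete, here is a minimal sketch on a small matrix of my own choosing (the matrix, the eigenvalue, and the helper names `mat_vec` and `shifted` are illustrative assumptions, not part of the question):

```python
# A hand-checked instance of the completion step on a 3x3 matrix where
# lambda = 2 has algebraic multiplicity 2 but a 1-dimensional eigenspace.
# Matrix and helper names are illustrative choices.

def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def shifted(A, lam):
    """Return A - lam * I."""
    n = len(A)
    return [[A[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]

A = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 3]]
lam = 2
v1 = [1, 0, 0]          # basis of the eigenspace F(2)
B = shifted(A, lam)     # A - 2I

# Completion step: seek w with (A - 2I) w = v1.  Here w = (0, 1, 0) works.
w = [0, 1, 0]
print(mat_vec(B, w))               # [1, 0, 0], i.e. v1
print(mat_vec(B, mat_vec(B, w)))   # [0, 0, 0]: w lies in S(2)
```

The pair $(v_1, w)$ then plays the role of the completed basis of $S(2)$.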
My question is about the sentence in bold: I am not able to find the argument proving that the vectors $w_{p+1}$ determined by this procedure are linearly independent of the family $(v_1,…,v_p)$. Can you give me a hint to help me prove this, please?
Thank you very much!
Some definitions:
1- $v$ ($\neq 0_{E}$) is a spectral eigenvector for the eigenvalue $\lambda$ if there exists $n\geq 1$ such that $(f-\lambda I)^n(v) = 0_{E}$, where $f^n$ denotes the composition of $f$ with itself $n$ times, with the convention that $f^0 = I$, the identity mapping.
2- $S(\lambda)$ is the spectral eigenspace associated to $\lambda$: the set of all the spectral eigenvectors of $\lambda$, together with $0_E$.
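As a small illustration of definition 1 (the matrix and the helper name `spectral_order` are my own illustrative choices), one can check the smallest $n$ with $(f-\lambda I)^n(v) = 0_E$ by repeated application:

```python
# Minimal check of definition 1: v is a spectral eigenvector for lambda
# if some power of (f - lambda*I) kills it.  The 2x2 Jordan block below
# is an illustrative choice.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[5, 1],
     [0, 5]]             # single Jordan block for lambda = 5
B = [[0, 1],
     [0, 0]]             # A - 5I

def spectral_order(v, B, max_n=10):
    """Smallest n >= 1 with (A - lam*I)^n v = 0, or None if none found."""
    for n in range(1, max_n + 1):
        v = mat_vec(B, v)
        if all(x == 0 for x in v):
            return n
    return None

print(spectral_order([1, 0], B))   # 1: an ordinary eigenvector
print(spectral_order([0, 1], B))   # 2: spectral but not ordinary
```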
Sorry, this possibly contains a lot of mistakes.
The linear combination of the basis $(v_1,…,v_p)$ (denoted $b_{p+1}$) has to be a nonzero linear combination.
Then, for the next steps: the linear combination $b_i$ of $(v_1,…,v_p,…, w_{i-1})$ (for $i\geq p+2$) should be linearly independent of the previous $b_k$ ($p+1\leq k<i$).
This is possible because, at each step, $b_{i+1}$ is a linear combination of $(v_1,…,v_p,…,w_i)$ while $b_{i}$ is a linear combination of $(v_1,…,v_p,…,w_{i-1})$.
That said, I have some difficulty turning the result I stated about the spectral eigenspace into a procedure that determines the reduced form of a similar matrix. If you have some recommendations on how to perform such a task, it would help me a lot. Thank you.
This isn't an answer, but it was too long for a comment. I'm not sure I follow what you are doing. You write "For this, we take a linear combination of the basis (denote $b_{p+1}$)", but it is not clear which linear combination you are taking, nor how you know that a solution $w_{p+1}$ will exist (since $f-\lambda_k I$ has less than full rank). But in any case, if the $w_{p+1}$ you find were some linear combination of $v_1, \ldots, v_p$, say $$w_{p+1}=\sum c_iv_i,$$ then applying $f-\lambda_k I$ to both sides would give $$(f-\lambda_kI)w_{p+1} = \sum c_i(f-\lambda_kI)v_i =0,$$ since the $v_i$ are eigenvectors with eigenvalue $\lambda_k$.
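The argument above can also be checked numerically on a toy matrix (the matrix and eigenvectors below are illustrative choices of mine): $(A-\lambda I)$ annihilates every linear combination of eigenvectors, so a solution $w$ of $(A-\lambda I)w = b$ with $b \neq 0$ cannot be such a combination.

```python
# Numerical version of the argument: (A - lam*I) kills every linear
# combination of eigenvectors for lam.  Matrix is an illustrative choice.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[2, 0, 1],
     [0, 2, 0],
     [0, 0, 3]]
lam = 2
B = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]

v1, v2 = [1, 0, 0], [0, 1, 0]   # eigenvectors for lam = 2

# Every combination c1*v1 + c2*v2 is sent to zero by A - 2I:
for c1, c2 in [(1, 0), (3, -2), (0, 7)]:
    v = [c1 * a + c2 * b for a, b in zip(v1, v2)]
    print(mat_vec(B, v))         # [0, 0, 0] each time
```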
Basically, I think you may want to look at the spaces $$V_j(\lambda_k)=\{ v \mid (f-\lambda_kI)^jv=0\},$$ so that $V_1 \subseteq V_2 \subseteq \cdots$.
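The nested kernels can be seen concretely on a single nilpotent Jordan block (a hypothetical example of mine; the helper name `level` is not from the post): the standard basis vector $e_j$ first enters the chain at level $j$.

```python
# The chain V_1, V_2, ... made concrete on one size-3 nilpotent block
# N = A - lam*I.  Matrix and helper name are illustrative choices.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

N = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]          # (A - lam*I) for a size-3 Jordan block

def level(v, N, max_j=5):
    """Smallest j with N^j v = 0, i.e. the first V_j containing v."""
    for j in range(1, max_j + 1):
        v = mat_vec(N, v)
        if all(x == 0 for x in v):
            return j
    return None

print([level(e, N) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])])  # [1, 2, 3]
```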
But I'm not sure how all this relates to reduced echelon form, which is about row operations, while this eigen-stuff is Jordan decomposition ... but maybe I'm just being ignorant.