I am trying to show that the following algorithm outputs the solution to the problem $Ax=b$.
Assumptions: $A$ is symmetric positive definite of size $n \times n$ with $m$ distinct eigenvalues $\lambda_1,\dots,\lambda_m$. The eigenvalues are known, but their respective eigenvectors and the multiplicity of each eigenvalue are not known.
The algorithm is
\begin{align} \text{for } &i = 0,\dots, m-1\\ &r_i = b - Ax_i\\ &x_{i+1} = x_i + \frac{1}{\lambda_{i+1}}r_i\\ \text{end} \end{align}
and it should converge in $m$ steps.
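As a sanity check, here is a minimal numerical sketch (the test matrix, multiplicities, and variable names are my own illustrative choices, not part of the problem): it builds an SPD matrix with $m=3$ distinct eigenvalues of varying multiplicity, runs the iteration, and prints the final residual.

```python
import numpy as np

# Minimal sketch, assuming NumPy; the spectrum below is illustrative.
rng = np.random.default_rng(0)
lambdas = [1.0, 3.0, 7.0]          # m = 3 distinct, known eigenvalues
mults = [1, 2, 3]                  # multiplicities: unknown to the algorithm
n = sum(mults)

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal eigenbasis
A = Q @ np.diag(np.repeat(lambdas, mults)) @ Q.T   # SPD with the given spectrum

b = rng.standard_normal(n)
x = np.zeros(n)
for lam in lambdas:                # one step per distinct eigenvalue
    r = b - A @ x                  # r_i = b - A x_i
    x = x + r / lam                # x_{i+1} = x_i + (1/lambda_{i+1}) r_i

print(np.linalg.norm(b - A @ x))   # ~1e-15: exact in m steps, up to rounding
```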
What I have done so far:
My aim has been to show that $||x-x_m|| = 0$, which in turn shows that $x-x_m = 0$ and completes the proof.
\begin{align} e_m &= x - x_m\\ &= (x - x_{m-1}) + (x_{m-1} - x_m)\\ &= e_{m-1} - \frac{1}{\lambda_m}r_{m-1}\\ & = e_0 - \sum_{i=0} ^{m-1} \frac{1}{\lambda_{i+1}}r_{i} \end{align}
At this point I tried taking the norm, but that does not get me anywhere; I think I am missing something. I would appreciate some ideas and guidance.
We can express the initial error $e_0$ as a linear combination of eigenvectors $v_j$ corresponding to the eigenvalues $\lambda_j$: $$ e_0=\sum_{j=1}^mc_jv_j. $$ It does not matter that some eigenvalues have multiplicity greater than one: since $A$ is symmetric, its eigenspaces together span $\mathbb{R}^n$, so we can take $c_jv_j$ to be the orthogonal projection of $e_0$ onto the eigenspace of $\lambda_j$.
The initial residual $r_0$ is then $$ r_0=b-Ax_0=A(x-x_0)=Ae_0=\sum_{j=1}^mc_j\lambda_jv_j. $$
Now for the error $e_1$, we have $$ e_1=e_0-\frac{1}{\lambda_1}r_0 =\sum_{j=1}^mc_jv_j-\frac{1}{\lambda_1}\sum_{j=1}^mc_j\lambda_jv_j =\sum_{j=2}^m c_j\left(1-\frac{\lambda_j}{\lambda_1}\right)v_j=\sum_{j=2}^m\tilde{c}_j v_j. $$ Note that the component corresponding to $v_1$ disappeared, since its coefficient is $c_1(1-\lambda_1/\lambda_1)=0$.
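In operator form, the same computation reads as follows (this is only a restatement, no new assumptions): each step multiplies the error by a linear polynomial in $A$,
$$ e_{i+1}=e_i-\frac{1}{\lambda_{i+1}}r_i=\left(I-\frac{1}{\lambda_{i+1}}A\right)e_i, \qquad\text{so}\qquad e_m=\prod_{j=1}^{m}\left(I-\frac{1}{\lambda_j}A\right)e_0. $$
The product is a polynomial in $A$ that vanishes at every eigenvalue $\lambda_1,\dots,\lambda_m$, which is another way to see why every eigencomponent of $e_0$ is wiped out.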
Proceeding in this way (formally, by induction on $i$), you can show that $e_i$ has no component in the eigenspaces corresponding to the eigenvalues $\lambda_1,\ldots,\lambda_i$: the step with $\lambda_{i+1}$ annihilates one more eigencomponent and only rescales the remaining ones, never reintroducing those already removed. Hence $e_m$ is zero.
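To watch this mechanism numerically, one can track the size of the error's component in each eigenspace after every step; a sketch under the same illustrative setup as above (all names are mine):

```python
import numpy as np

# Sketch: print the norm of e_i's component in each eigenspace per step.
rng = np.random.default_rng(1)
lambdas, mults = [1.0, 3.0, 7.0], [1, 2, 3]
n = sum(mults)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.repeat(lambdas, mults)) @ Q.T

x_true = rng.standard_normal(n)
b = A @ x_true
x = np.zeros(n)
splits = np.cumsum(mults)[:-1]     # column ranges of Q per eigenspace
for i, lam in enumerate(lambdas, start=1):
    x = x + (b - A @ x) / lam
    coeffs = Q.T @ (x_true - x)    # coordinates of e_i in the eigenbasis
    norms = [np.linalg.norm(c) for c in np.split(coeffs, splits)]
    print(f"after step {i}:", [f"{v:.1e}" for v in norms])
# The first i entries are ~0 after step i; after step m the error is zero.
```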