Finding the correct step-size in Richardson Iteration


I am reading Daniel Spielman's lecture notes on the conjugate gradient method (Link to the lecture notes), in which he proves that the optimal step size $\alpha$ for Richardson iteration is $$\frac{2}{\lambda_1 + \lambda_n}.$$ I do not understand how this is derived. In detail: let $0 < \lambda_1 \leq \ldots \leq \lambda_n$ be the eigenvalues of the positive definite matrix $A$. The step size $\alpha$ is the minimizer over $\alpha$ of $$\max_i | 1- \alpha \lambda_i| = \max\left(|1-\alpha \lambda_1|, |1- \alpha \lambda_n| \right).$$ How can I calculate this minimizer?
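The claim can be checked numerically: sweep $\alpha$ over a fine grid, evaluate $\max_i |1-\alpha\lambda_i|$, and compare the grid minimizer with $2/(\lambda_1+\lambda_n)$. The spectrum below is an arbitrary illustrative choice, not taken from the notes.

```python
import numpy as np

# Hypothetical spectrum for illustration; any 0 < lam_1 <= ... <= lam_n works.
lams = np.array([1.0, 2.5, 4.0, 9.0])
lam1, lamn = lams[0], lams[-1]

# Evaluate the objective max_i |1 - alpha * lam_i| on a fine grid of alpha.
alphas = np.linspace(1e-3, 2.0 / lam1, 200001)
objective = np.max(np.abs(1.0 - np.outer(alphas, lams)), axis=1)

# The grid minimizer should land (up to grid resolution) on 2/(lam_1 + lam_n).
alpha_star = alphas[np.argmin(objective)]
print(alpha_star, 2.0 / (lam1 + lamn))
```

For this spectrum the formula gives $2/(1+9) = 0.2$, and the grid search recovers the same value up to the grid spacing; only the extreme eigenvalues matter, as the derivation below explains.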



Best Answer

Note that to minimize $\max\limits_{i}\left|1-\alpha\lambda_i\right|$ with respect to $\alpha$, we want the two extreme values bounding it, namely $\left|1-\alpha\lambda_1\right|$ and $\left|1-\alpha\lambda_n\right|$, to be equal: if one were strictly larger, we could perturb $\alpha$ to decrease it, reducing the maximum.

Hence, set $\left|1-\alpha\lambda_1\right| = \left|1-\alpha\lambda_n\right|$. Recall that $\alpha > 0$ and $\lambda_1 < \lambda_n$, so $1-\alpha\lambda_1 > 1-\alpha\lambda_n$; the absolute values can therefore only be equal when the two quantities have opposite signs, i.e. $1-\alpha\lambda_1 > 0$ and $1-\alpha\lambda_n < 0$. Thus, we have

$$ 1-\alpha\lambda_1 = -(1-\alpha\lambda_n) \implies \alpha = \frac{2}{\lambda_1 + \lambda_n} $$
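With this $\alpha$, Richardson iteration $x_{k+1} = x_k + \alpha(b - Ax_k)$ contracts the error at rate $(\lambda_n-\lambda_1)/(\lambda_n+\lambda_1)$. A minimal sketch on a randomly generated SPD matrix (an assumption for illustration, not an example from the notes):

```python
import numpy as np

# Build a small SPD matrix: M M^T is PSD, and adding 5*I pushes all
# eigenvalues above 5, so A is positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)

# Optimal Richardson step size from the extreme eigenvalues.
lams = np.linalg.eigvalsh(A)          # sorted ascending
alpha = 2.0 / (lams[0] + lams[-1])

# Richardson iteration: x_{k+1} = x_k + alpha * (b - A x_k)
x = np.zeros(5)
for _ in range(500):
    x = x + alpha * (b - A @ x)

print(np.linalg.norm(A @ x - b))      # residual should be near machine precision
```

Computing the full spectrum just to set a step size is of course impractical at scale; in practice one uses bounds on $\lambda_1$ and $\lambda_n$, or methods such as conjugate gradient that need no spectral information.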