Does CR for positive definite matrices have the same convergence theorem as CG?


The conjugate residual (CR) algorithm (specifically for symmetric positive definite matrices) differs from the conjugate gradient (CG) algorithm in a very slight way. I am using this as a reference. On page 6, the CG algorithm is described.

CG uses $$\alpha_k=\frac{r_k^Tr_k}{p_k^TAp_k}$$ and $$p_{k+1}=r_{k+1}+\frac{r_{k+1}^Tr_{k+1}}{r_k^Tr_k}p_k,$$ while CR uses $$\alpha_k=\frac{r_k^TAr_k}{p_k^TA^2p_k}$$ and $$p_{k+1}=r_{k+1}+\frac{r_{k+1}^TAr_{k+1}}{r_k^TAr_k}p_k.$$

Clearly the two algorithms are almost identical; the only difference is how the step size and search directions are chosen. This comes down to which inner product is used on the residuals: CG uses the standard inner product $r^Tr$, while CR uses the $A$-inner product $r^TAr$.
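To make the one-line difference concrete, here is a minimal NumPy sketch of both iterations (exact arithmetic assumed, zero initial guess, fixed iteration count rather than a convergence test). The only change between `cg` and `cr` is swapping the standard inner products for $A$-inner products:

```python
import numpy as np

def cg(A, b, iters):
    # Conjugate gradient: inner products are the standard r^T r.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)          # alpha_k = r_k^T r_k / p_k^T A p_k
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)    # beta_k = r_{k+1}^T r_{k+1} / r_k^T r_k
        p = r_new + beta * p
        r = r_new
    return x

def cr(A, b, iters):
    # Conjugate residual: same skeleton, but every inner product on
    # residuals picks up a factor of A.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ (A @ r)) / (Ap @ Ap)   # alpha_k = r_k^T A r_k / p_k^T A^2 p_k
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ (A @ r_new)) / (r @ (A @ r))
        p = r_new + beta * p
        r = r_new
    return x
```

(A practical CR implementation reuses $Ap_k$ and $Ar_k$ to keep one matrix-vector product per iteration; the naive version above recomputes them for clarity.)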

On page 8, Theorem 4.4 gives the convergence bound for the CG algorithm. Its proof, on page 9, uses Chebyshev polynomials to derive an upper bound on the error. My question: should CR satisfy exactly the same upper bound as the CG convergence theorem, given that the bound relies only on the Chebyshev polynomials, which would be the same for both CR and CG?

1 Answer

In exact arithmetic, conjugate residual and MINRES both minimize the 2-norm of the residual over the Krylov subspace; see for instance Theorem 2 in the original CR paper. Thus, bounds similar to the one you cite for CG can be derived using Chebyshev polynomials. However, you need to be careful about which norm you are working in: the bounds will not apply to the $A$-norm of the error (as in the CG theorem), but rather to the 2-norm of the residual.
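As a sanity check on this, here is a small numerical sketch (the diagonal test matrix, its size, and the iteration count are illustrative assumptions, not from the answer). Because CR minimizes $\|r_k\|_2$ over the Krylov subspace, the residual ratio should stay below the Chebyshev estimate $2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k$:

```python
import numpy as np

def cr_residual_norms(A, b, iters):
    # Run the conjugate residual iteration and record ||r_k||_2 at each step.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    norms = [np.linalg.norm(r)]
    rAr = r @ (A @ r)
    for _ in range(iters):
        Ap = A @ p
        alpha = rAr / (Ap @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rAr_new = r @ (A @ r)
        p = r + (rAr_new / rAr) * p
        rAr = rAr_new
        norms.append(np.linalg.norm(r))
    return np.array(norms)

# SPD test problem with a known condition number kappa = 100.
eigs = np.linspace(1.0, 100.0, 50)
A = np.diag(eigs)
b = np.ones(50)
norms = cr_residual_norms(A, b, 20)

kappa = eigs[-1] / eigs[0]
rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
bound = 2 * rho ** np.arange(21)   # Chebyshev bound on ||r_k||_2 / ||r_0||_2
```

In exact arithmetic every ratio `norms[k] / norms[0]` sits below `bound[k]`; in floating point the same holds here up to a tiny tolerance.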