Iterative Refinement and Monotone Operators


In the paper Ernest K. Ryu, Stephen Boyd - A Primer on Monotone Operator Methods - Survey, the authors frame iterative refinement (of an approximate solution to a linear system $Ax=b$) in the context of monotone operators, resolvents, and Cayley operators. They require $A+A^T \succeq 0$ to ensure that the operator $F(x) = Ax - b$ is maximal monotone, so that applying the resolvent of $F$, defined as $R_F = (I + \frac{1}{\epsilon}F)^{-1}$, gives a proximal point method with the iteration

$$ r_k = b-Ax_k, \qquad x_{k+1} = x_k + (\epsilon I + A)^{-1} r_k $$

I would have thought we'd use $(I + \epsilon A)^{-1}$ as the Resolvent for the proximal point method; how do the authors get $(\epsilon I + A)^{-1}$ from $(I + \frac{1}{\epsilon}F)^{-1}$? Am I missing a matrix inverse identity? Why are proximal point methods important?
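(For concreteness, here is a minimal NumPy sketch of the iteration as stated above; the specific matrix $A$, right-hand side $b$, and $\epsilon = 1$ are arbitrary choices, not from the paper.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build A with A + A^T positive semidefinite (here: symmetric positive definite),
# as required for monotonicity of F(x) = Ax - b.
M = rng.standard_normal((n, n))
A = M @ M.T + 0.1 * np.eye(n)
b = rng.standard_normal(n)
eps = 1.0

x = np.zeros(n)
for _ in range(500):
    r = b - A @ x                                     # residual r_k = b - A x_k
    x = x + np.linalg.solve(eps * np.eye(n) + A, r)   # x_{k+1} = x_k + (eps I + A)^{-1} r_k

print(np.allclose(A @ x, b))  # iterates converge to the solution of Ax = b
```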


1 Answer

  1. If you go back to the definition of the proximal point method in Section 6.2, you will see that the resolvent to apply is $R_F$, the resolvent of $F(x) = Ax - b$, not $R_A$. The matrix identity you are looking for is $(I + \frac{1}{\epsilon}A)^{-1} = \epsilon(\epsilon I + A)^{-1}$, which follows from $I + \frac{1}{\epsilon}A = \frac{1}{\epsilon}(\epsilon I + A)$.
  2. Why are proximal point methods important? As the primer shows, many classical algorithms can be derived by applying the proximal point method to different operators; the iterative refinement algorithm is one example discussed there.
  3. To derive the iterative refinement algorithm from the proximal point method, write $x_{k+1} = R_F(x_k)$, which unfolds to $$x_{k+1} + \frac{1}{\epsilon}(Ax_{k+1} - b) = x_k, \qquad x_k + \frac{1}{\epsilon}(Ax_k - b) = x_{k-1}.$$ Subtracting the second relation from the first gives $(I + \frac{1}{\epsilon}A)(x_{k+1} - x_k) = x_k - x_{k-1}$, i.e. $x_{k+1} - x_k = (\epsilon I + A)^{-1}\,\epsilon\,(x_k - x_{k-1})$.

Finally, since the second relation gives $x_k - x_{k-1} = -\frac{1}{\epsilon}(Ax_k - b)$, we obtain $$x_{k+1} = x_k + (x_{k+1} - x_k) = x_k - (\epsilon I + A)^{-1}\,\epsilon\,\tfrac{1}{\epsilon}(Ax_k - b) = x_k + (\epsilon I + A)^{-1}(b - Ax_k),$$ which is exactly the iterative refinement update with residual $r_k = b - Ax_k$.
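A quick numerical sanity check of the two claims above (the resolvent step equals the iterative refinement step, and the inverse identity): this is just a sketch with an arbitrarily chosen $A$, $b$, $x_k$, and $\epsilon$, not code from the primer.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 4, 0.5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)          # A + A^T >= 0 holds by construction
b = rng.standard_normal(n)
x_k = rng.standard_normal(n)

# Resolvent step x_{k+1} = R_F(x_k): solve (I + (1/eps) A) x_{k+1} = x_k + (1/eps) b
x1 = np.linalg.solve(np.eye(n) + A / eps, x_k + b / eps)

# Iterative refinement form: x_{k+1} = x_k + (eps I + A)^{-1} (b - A x_k)
x2 = x_k + np.linalg.solve(eps * np.eye(n) + A, b - A @ x_k)

print(np.allclose(x1, x2))  # the two updates coincide

# The matrix inverse identity: (I + (1/eps) A)^{-1} = eps (eps I + A)^{-1}
lhs = np.linalg.inv(np.eye(n) + A / eps)
rhs = eps * np.linalg.inv(eps * np.eye(n) + A)
print(np.allclose(lhs, rhs))
```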