In the paper *A Primer on Monotone Operator Methods* (a survey) by Ernest K. Ryu and Stephen Boyd, the authors frame iterative refinement (of an approximate solution to a linear system $Ax=b$) in the context of monotone operators, resolvents, and Cayley operators. They require $A+A^T \succeq 0$ to ensure that the operator $F(x) = Ax-b$ is maximal monotone, so that applying the resolvent of $F$, defined as $R_F = (I + \frac{1}{\epsilon}F)^{-1}$, gives a proximal point method with the iteration:
$$ r_k = b-Ax_k, \qquad x_{k+1} = x_k + (\epsilon I + A)^{-1} r_k $$
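To check that I am reading the iteration correctly, here is a minimal numerical sketch (the matrix, $\epsilon$, and iteration count are my own illustrative choices, not from the paper):

```python
import numpy as np

# Minimal sketch of the stated iteration on a made-up A with A + A^T >= 0
# (A is deliberately non-symmetric: a positive definite part plus a skew part).
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
S = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n) + (S - S.T)   # A + A^T = 2(M M^T + I), positive definite
b = rng.standard_normal(n)
eps = 0.1

x = np.zeros(n)
for k in range(100):
    r = b - A @ x                                      # r_k = b - A x_k
    x = x + np.linalg.solve(eps * np.eye(n) + A, r)    # x_{k+1} = x_k + (eps I + A)^{-1} r_k

print(np.allclose(A @ x, b))   # True: the iterates converge to the solution of Ax = b
```

The iterates do converge to $A^{-1}b$, so the fixed point is the solution of $Ax=b$, as expected.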
I would have thought we would use $(I + \epsilon A)^{-1}$ as the resolvent for the proximal point method. How do the authors get $(\epsilon I + A)^{-1}$ from $(I + \frac{1}{\epsilon}F)^{-1}$? Am I missing a matrix inverse identity? (I attempt a reconstruction at the end of the post.) And why are proximal point methods important?
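Numerically, at least, the scalar seems to factor straight out of the inverse; this quick check (mine, not from the paper) passes:

```python
import numpy as np

# Check the candidate identity (I + (1/eps) A)^{-1} = eps * (eps I + A)^{-1}
# on a random matrix (random data is my own choice, for illustration only).
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
eps = 0.1
lhs = np.linalg.inv(np.eye(n) + A / eps)
rhs = eps * np.linalg.inv(eps * np.eye(n) + A)
print(np.allclose(lhs, rhs))   # True
```

So the factor of $\epsilon$ itself is not the problem; what I cannot see is how the affine term $-b$ inside $F$ ends up producing the residual $r_k$.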
Finally, the paper obtains $x_{k+1} = x_k + (x_{k+1}-x_k) = x_k - (\epsilon I + A)^{-1}\,\epsilon\,\frac{1}{\epsilon}(Ax_k - b)$, which corresponds to the iterative refinement algorithm.
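For reference, here is my attempt at reconstructing the resolvent solve, assuming as above that $F(x) = Ax - b$ (the intermediate steps are mine, not the paper's). Setting $x_{k+1} = (I + \frac{1}{\epsilon}F)^{-1}(x_k)$, i.e. solving $(I + \frac{1}{\epsilon}F)(y) = x_k$ for $y = x_{k+1}$:
$$
\begin{aligned}
y + \tfrac{1}{\epsilon}(Ay - b) &= x_k \\
(\epsilon I + A)\,y &= \epsilon x_k + b \\
y &= x_k + (\epsilon I + A)^{-1}(b - A x_k),
\end{aligned}
$$
which matches the update above. If this is right, the $\frac{1}{\epsilon}$ is simply absorbed as a scalar and the $-b$ term comes out of the affine solve, but I would appreciate confirmation that this is the intended derivation.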