Under what conditions does the fixed-point iteration $\mathbf{x}_{new}=A\mathbf{x}+\mathbf{b}$ converge (to $(I-A)^{-1}\mathbf{b}$)?
Would it be correct to say that convergence is guaranteed, for any starting vector, if all the eigenvalues of $A$ have magnitude strictly less than $1$, i.e. if the spectral radius of $A$ is below $1$?
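For concreteness, here is a small NumPy sketch of the behaviour I have in mind (the matrix and vector are arbitrary examples; the spectral radius of this $A$ is about $0.48$):

```python
import numpy as np

# Arbitrary example: a 2x2 matrix whose spectral radius is below 1.
A = np.array([[0.2, 0.5],
              [0.1, 0.3]])
b = np.array([1.0, 2.0])

assert max(abs(np.linalg.eigvals(A))) < 1  # spectral radius check

x = np.zeros(2)
for _ in range(200):
    x = A @ x + b  # fixed-point update

# Compare against the direct solve of (I - A) x = b.
expected = np.linalg.solve(np.eye(2) - A, b)
print(np.allclose(x, expected))  # True
```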
Secondly, suppose the fixed-point iterations do not converge and the diagonal entries of $A$ are all zero. How can the update rule be modified so that the iterations do converge (still to $(I-A)^{-1}\mathbf{b}$)?
I imagine iterating instead with $\mathbf{x}_{new}=\frac{1}{s}(A\mathbf{x}+\mathbf{b})+(1-\frac{1}{s})\mathbf{x}=\frac{1}{s}((A-(1-s)I)\mathbf{x}+\mathbf{b})$, but I don't know whether $s$ should be related to the largest eigenvalue of $A$, or to something else, in order to guarantee convergence.
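Numerically, the damping does seem to help on a small example where the plain iteration blows up. Here $A$ has zero diagonal and eigenvalues $\pm i\sqrt{3}$ (spectral radius $\sqrt{3}>1$), and $s=3$ was found by trial and error; I don't know how to choose it in general:

```python
import numpy as np

# Arbitrary example with zero diagonal whose plain iteration diverges:
# eigenvalues are +/- i*sqrt(3), so the spectral radius is sqrt(3) > 1.
A = np.array([[0.0, -3.0],
              [1.0,  0.0]])
b = np.array([1.0, 1.0])
target = np.linalg.solve(np.eye(2) - A, b)  # (I - A)^{-1} b

s = 3.0  # damping factor, picked by trial and error here
x = np.zeros(2)
for _ in range(500):
    x = (A @ x + b) / s + (1 - 1 / s) * x  # damped update

print(np.allclose(x, target))  # True
```

With this $A$, the plain update $\mathbf{x}_{new}=A\mathbf{x}+\mathbf{b}$ diverges from the same starting point, while the damped version settles on the same solution as the direct solve.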