Newton's method as a mapping of root finding problems to fixed point problems


I'm trying to understand the relative merits of viewing Newton's method as a fixed-point iteration scheme versus as a root-finding iteration scheme.

As a concrete example, suppose we want to solve $x^2 = y$ for $x$. One way of doing this would be to look for the roots of $f(x) := x^2 - y$ (with something like $x^{k+1} = x^k - f(x^k)/f'(x^k)$). Alternatively, given $f$, we could also look for a fixed point of $g(x) := x - f(x)$, i.e. a point with $g(x) = x$ (whose solution would again give $f(x) = 0$ and $x^2 = y$).

Why do we use one over the other and what are the differences in solving the root finding problem vs the fixed-point problem? Furthermore, why is it preferred to look at $g(x) = x - f(x)$ vs $g(x) = x + f(x)$ here, and is there something more general behind this?
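For concreteness, here is a minimal sketch of the two iterations described above, applied to $y = 2$ with a starting guess of $1.5$ (the starting point and iteration count are illustrative choices, not part of the question):

```python
import math

def newton_sqrt(y, x0=1.5, iters=5):
    # Newton's method on f(x) = x^2 - y:  x_{k+1} = x_k - f(x_k)/f'(x_k)
    x = x0
    for _ in range(iters):
        x = x - (x * x - y) / (2 * x)
    return x

def naive_fixed_point(y, x0=1.5, iters=5):
    # fixed-point iteration on g(x) = x - f(x) = x - (x^2 - y)
    x = x0
    for _ in range(iters):
        x = x - (x * x - y)
    return x

print(newton_sqrt(2.0))        # settles rapidly near sqrt(2) ~ 1.41421356
print(naive_fixed_point(2.0))  # wanders: |g'(sqrt(2))| = |1 - 2*sqrt(2)| > 1
```

After just a handful of steps the Newton iterate agrees with $\sqrt{2}$ to machine precision, while the naive $g(x) = x - f(x)$ iteration oscillates rather than settling, which is exactly the contrast the question is asking about.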

In general there are many fixed-point iterations for a given root-finding problem: all you need is a function $h$ that vanishes exactly where $f$ does, and then any fixed point of $g(x) = x + h(x)$ is a root of $h$ and thus a root of $f$. But we also want the iteration to converge, which requires $g$ to be contractive near the fixed point, i.e. $|g'(x)| < 1$ there. Newton's method for a twice differentiable $f$ obtains this contractivity near the root by choosing $h(x) = -f(x)/f'(x)$: then $h'(x) = -1 + f(x)f''(x)/f'(x)^2$, so $h'(r) = -1$ at the root $r$ (assuming $f'(r) \neq 0$), giving $g'(r) = 0$ and hence local quadratic convergence. Working with $g(x) = x + f(x)$ or $g(x) = x - f(x)$ has no such guarantee: there $g'(r) = 1 \pm f'(r)$, which can easily exceed $1$ in absolute value.
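This contractivity criterion can be checked numerically. Here is a small sketch (using the example $f(x) = x^2 - 2$ and a central-difference derivative; both are illustrative choices) that compares $|g'(r)|$ at the root for Newton's choice of $h$ against $g(x) = x \pm f(x)$:

```python
import math

# Compare the contractivity |g'(r)| at the root r = sqrt(2)
# for three fixed-point formulations of f(x) = x^2 - 2.

f  = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

g_newton = lambda x: x - f(x) / df(x)  # h(x) = -f(x)/f'(x), so h'(r) = -1
g_minus  = lambda x: x - f(x)          # h(x) = -f(x)
g_plus   = lambda x: x + f(x)          # h(x) = +f(x)

def deriv(g, x, eps=1e-6):
    # central-difference approximation of g'(x)
    return (g(x + eps) - g(x - eps)) / (2 * eps)

r = math.sqrt(2)
for name, g in [("Newton", g_newton), ("x - f(x)", g_minus), ("x + f(x)", g_plus)]:
    print(f"{name:9s} |g'(r)| = {abs(deriv(g, r)):.3f}")
```

Only the Newton formulation has $|g'(r)| \approx 0 < 1$; the other two give $|1 - 2\sqrt{2}| \approx 1.83$ and $|1 + 2\sqrt{2}| \approx 3.83$, so neither is contractive near the root, matching the claim above.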