Finding roots by Fixed Point Iteration


How do I find the root of an equation by fixed point iteration? Is there a definition or theorem for when a fixed point iteration converges to a root? And is it correct that if $x = g(x)$, then $x$ is a root of the equation? Thanks in advance!

2 Answers

There is no good answer. Sometimes there are intuitive transformations that give a good fixed point method, like transforming $0=x^5-x^2-1$ to $$ x=g(x)=\sqrt[5]{1+x^2} $$
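A quick numerical sketch of iterating this particular $g$ (my own illustration; the starting guess and tolerance are arbitrary):

```python
# Fixed point iteration for x^5 - x^2 - 1 = 0, rewritten as
# x = g(x) = (1 + x^2)^(1/5).

def g(x):
    return (1.0 + x * x) ** 0.2

x = 1.0  # arbitrary starting guess
for _ in range(100):
    x_new = g(x)
    if abs(x_new - x) < 1e-12:  # stop when successive iterates agree
        x = x_new
        break
    x = x_new

print(x)                    # the fixed point of g
print(x ** 5 - x ** 2 - 1)  # residual of the original equation (near 0)
```

This transform works because $|g'(x)|$ is well below $1$ near the root, so successive iterates contract toward it.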

Sometimes you have to follow general schemes like the Newton method. But even that does not give a unique fixed point method, and the Newton methods for equivalent formulations of the same problem need not have similar qualities (except for the, ultimately local, quadratic convergence).

For instance, for the square root problem, the Newton methods starting from $$ f(x)=x^2-a,\ f(x)=x^{3/2}-ax^{-1/2},\ f(x)=1-ax^{-2} $$ give very different fixed point iterations.
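For concreteness, here is a sketch of the three resulting iteration maps for $a=2$ (my own working-out of $x - f(x)/f'(x)$ for each $f$; the function names are mine):

```python
# Newton maps for three equivalent formulations of "find sqrt(a)".
# All three converge to sqrt(a), but via different update formulas.

a = 2.0

def newton_sq(x):   # f(x) = x^2 - a          ->  x - (x^2 - a)/(2x)
    return 0.5 * (x + a / x)

def newton_pow(x):  # f(x) = x^(3/2) - a x^(-1/2)
    return x * (x * x + 3.0 * a) / (3.0 * x * x + a)

def newton_inv(x):  # f(x) = 1 - a x^(-2)     ->  division-free update
    return 1.5 * x - x ** 3 / (2.0 * a)

for step in (newton_sq, newton_pow, newton_inv):
    x = 1.0
    for _ in range(50):
        x = step(x)
    print(step.__name__, x)  # each approaches sqrt(2)
```

Note how different the maps look: the last one needs no division by the iterate at all, which is why variants of it are used for hardware square-root routines.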


There is a theorem, the Banach fixed-point theorem, which guarantees the convergence of a fixed point iteration.

Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that

$d(T(x),T(y)) \le q d(x,y)$

for all x, y in X.

Banach Fixed Point Theorem. Let $(X, d)$ be a non-empty complete metric space with a contraction mapping $T : X \to X$. Then $T$ admits a unique fixed point $x^*$ in $X$ (i.e. $T(x^*) = x^*$). Furthermore, $x^*$ can be found as follows: start with an arbitrary element $x_0$ in $X$ and define a sequence $\{x_n\}$ by $x_n = T(x_{n-1})$; then $x_n \to x^*$.
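To make the theorem concrete, here is a small sketch (my own choice of example, not part of the answer): $T(x) = \cos x$ maps $X = [0,1]$ into itself and is a contraction there with $q = \sin 1 \approx 0.84$, so the iteration must converge to the unique fixed point.

```python
# Banach iteration with the contraction T(x) = cos(x) on X = [0, 1].
# |T'(x)| = |sin(x)| <= sin(1) < 1 on X, so the theorem applies.

import math

def T(x):
    return math.cos(x)

x = 0.0            # arbitrary x0 in X
for _ in range(200):
    x = T(x)       # x_n = T(x_{n-1})

print(x, T(x) - x)  # fixed point (~0.739), residual near 0
```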

One of the big challenges is actually finding a map T : X → X that satisfies these criteria. For one-dimensional problems, the Newton iteration map is a contraction only on a small section of the real line. It is for this reason that you need to start very close to the solution to get to the answer.
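As an illustration of that locality (my own example, not from the answer): for $f(x) = \arctan x$ the Newton map contracts only near the root $x^* = 0$, and starting beyond roughly $|x_0| \approx 1.39$ the iterates diverge.

```python
import math

def newton_step(x):
    # Newton map x - f(x)/f'(x) for f(x) = arctan(x), f'(x) = 1/(1+x^2)
    return x - math.atan(x) * (1.0 + x * x)

x = 1.0            # inside the basin of attraction
for _ in range(20):
    x = newton_step(x)
print(abs(x))      # tiny: converged to the root 0

y = 2.0            # outside the basin
for _ in range(8):
    y = newton_step(y)
print(abs(y))      # huge: the iterates run away
```

The same map, viewed as a candidate $T$ for Banach's theorem, simply fails the contraction condition once you leave a neighborhood of the root.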