I have a doubt regarding the fixed-point iteration method for solving $f(x)=0$ for a zero of $f$ in $[a,b]$. We rewrite it in the equivalent form $$g(x)=x.$$ The result I studied says that the iteration $$x_{n+1}=g(x_n)$$ is convergent iff $|g'(x)|<c<1,\forall x\in[a,b]$. I think I have misread the result, because I have a counter-example:
$$x_{n+1}=g(x_n)$$ where $$g(x)=1+\frac{2}{x}, \quad x\in[1,100].$$ I checked, by standard real-analysis methods, that the sequence $\langle x_n\rangle$ with $x_{n+1}=1+\frac{2}{x_n}$ converges to $2$ for any choice of $x_0\in [1,100]$, but $$|g'(x)|<c<1$$ is not true for all $x$ in $[1,100]$. Where is my mistake? Is the result not actually an if-and-only-if? Please help. Thank you.
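A quick numerical check of this claim (my own sketch, not part of the original question) confirms that the iterates approach $2$ from anywhere in $[1,100]$:

```python
# Iterate x_{n+1} = g(x_n) with g(x) = 1 + 2/x from several
# starting points in [1, 100]; every run settles near 2.

def g(x):
    return 1 + 2 / x

for x0 in (1.0, 3.0, 50.0, 100.0):
    x = x0
    for _ in range(60):
        x = g(x)
    print(f"x0 = {x0:6.1f}  ->  x_60 = {x:.12f}")
```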
The condition that $|g'(x)|<1$ over an entire interval containing the fixed point is a sufficient but not necessary condition for convergence. Likewise, the condition that $|g'(x)|>1$ over the same interval is a sufficient but not necessary condition for divergence (unless one happens to "stumble" upon the fixed point from outside the interval).
You can't always use these conditions to test for convergence/divergence. For example, $g$ may not be differentiable, or $|g'(x)|$ may switch between being less than and greater than $1$ over the interval of interest. The latter is what happens in your example.
Despite this, your example still converges. This is because it satisfies $|g'(x)|<1$ for all $x\in(\sqrt2,\infty)$, and $g(x)\in(\sqrt2,\infty)$ for any $x\in(0,\sqrt2]$. Hence, after the first iteration, you are guaranteed to find yourself in a region where the iteration converges.
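The two facts used above can be checked numerically (a sketch in my own notation; `g` and `gprime` are just the map and its derivative):

```python
import math

def g(x):
    return 1 + 2 / x

def gprime(x):
    return -2 / x**2

# g is decreasing on (0, sqrt(2)], so its minimum there is at
# x = sqrt(2); the value 1 + sqrt(2) already exceeds sqrt(2).
print(g(math.sqrt(2)) > math.sqrt(2))   # True

# And |g'(x)| = 2/x^2 < 1 for every x > sqrt(2).
for x in (1.5, 2.0, 10.0, 100.0):
    print(x, abs(gprime(x)) < 1)        # all True
```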
This is a good example of why one should be more interested in eventual convergence, i.e. "will it converge if you start close enough to the fixed point?" Assuming $g$ is continuously differentiable, this boils down to whether or not $|g'(x_\star)|<1$, where $x_\star=g(x_\star)$ is the fixed point. You can further relax this to allow $|g'(x_\star)|=1$ as long as $|g'(x)|<1$ holds almost everywhere near it, which is a sufficient but not necessary condition for convergence if you start almost anywhere near it.
Applying this sufficient (but not necessary) condition to your example:
The fixed-point may be observed to be $2$, and $g'(2)=-1/2$. Hence $x_{n+1}=g(x_n)$ converges provided you start close enough to $2$.
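Since $|g'(2)|=1/2$, the error should shrink by roughly a factor of $1/2$ per step once you are near $2$. A small sketch (my own code) that tracks the error ratios:

```python
# Start near the fixed point 2 and record |x_n - 2| each step;
# successive error ratios should approach |g'(2)| = 1/2.

def g(x):
    return 1 + 2 / x

x = 2.1
errs = []
for _ in range(8):
    x = g(x)
    errs.append(abs(x - 2))

for e_prev, e_next in zip(errs, errs[1:]):
    print(e_next / e_prev)   # ratios settle near 0.5
```

This linear rate is exactly what the condition $|g'(x_\star)|<1$ predicts.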
Note that this does not tell you how close you will need to start. But it's an easier condition to verify in some cases.
Some other examples you can try:
$x_{n+1}=\arctan(x_n)$ converges to $x_\star=0$ for all real starting points.
$x_{n+1}=2^{\operatorname{sgn}(x_n)}x_n$ converges to $x_\star=0$ only for nonpositive starting points, where $\operatorname{sgn}$ is the sign function.
$x_{n+1}=-2^{\operatorname{sgn}(x_n)}x_n$ converges to $x_\star=0$ only if your starting point is $0$.
For the last two, if we ignore the lack of differentiability at $x=0$, what is their difference? Is it sufficient in either case to look only at the magnitude of the derivatives?
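The three examples can be tried directly (a sketch; `sgn` is my own helper implementing the sign function):

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

# arctan: contracts toward 0 from any real start (slowly, since
# the derivative at 0 is exactly 1).
x = 10.0
for _ in range(1000):
    x = math.atan(x)
print(x)                      # small positive number near 0

# 2^sgn(x) * x: halves negative iterates, doubles positive ones.
x = -1.0
for _ in range(50):
    x = 2 ** sgn(x) * x
print(x)                      # essentially 0

# -2^sgn(x) * x: a nonzero start just cycles.
x = -1.0
seen = [x]
for _ in range(4):
    x = -(2 ** sgn(x)) * x
    seen.append(x)
print(seen)                   # [-1.0, 0.5, -1.0, 0.5, -1.0]
```

Note that in the last case $|g'|$ alternates between $1/2$ and $2$ along the orbit, so the magnitudes of the derivatives alone cannot distinguish it from the second case.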