I'm asked to find if the fixed-point iteration
$$x_{k+1} = g(x_k)$$
converges for the fixed points of the function $$g(x) = x^2 + \frac{3}{16}$$ which I found to be $\frac{1}{4}$ and $\frac{3}{4}$.
In this short video by Wen Shen,
she explains how to find these fixed points and how to check whether a fixed-point iteration converges. My doubt is about determining whether the iteration converges for a particular fixed point.
About halfway through the video, she derives the following relation for the error:
$$|e_{k+1}| = |g'(\alpha)|\, |e_k|$$
where $\alpha$ lies between $x_k$ and the fixed point $r$, by the mean value theorem, since $g$ is continuous and differentiable.
If $|g'(\alpha)| < 1$, then the fixed-point iteration converges.
I think I agree with this last statement, but when she checks whether the fixed-point iteration converges for a given root, she simply evaluates the derivative at the root itself.
I don't understand why this is equivalent to $$|e_{k+1}| = |g'(\alpha)|\, |e_k|.$$
Can someone fire up some light on my brain? (lol, can I say this?)
The point behind the derivative is that locally a function looks linear, and the derivative is the multiplier. Symbolically: $$g(x) \approx g(x_0) + g'(x_0)(x-x_0)$$ for $x$ near $x_0$. The statement "looks like" can be made precise with the Mean Value Theorem.
Now, try iterating a function of the form $g(x) = a + m(x-a)$. You'll find that it converges to its fixed point of $x=a$ when $|m|<1$.
We can prove this, since $$|g(x)-a| = |a+m(x-a) - a| = |m|\cdot |x-a|,$$ so that, writing $g^n$ for the $n$-fold iterate, $$|g^n(x)-a| = |m|^n |x-a| \to 0 \quad\text{when } |m|<1.$$
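A quick numeric check of this geometric decay (the values of $a$, $m$, and the starting point below are arbitrary choices for illustration):

```python
# Iterate the affine map g(x) = a + m*(x - a), whose fixed point is x = a.
# Illustrative parameters: any a works, and |m| < 1 gives convergence.
a, m = 2.0, 0.5
x = 5.0
errors = [abs(x - a)]
for _ in range(10):
    x = a + m * (x - a)          # one fixed-point step
    errors.append(abs(x - a))

# Each error is exactly |m| times the previous one.
ratios = [errors[k + 1] / errors[k] for k in range(10)]
print(ratios)                    # every ratio equals 0.5
```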
This is why we say the fixed point $x_0$ is attractive if $|g'(x_0)|<1$ and repulsive if $|g'(x_0)|>1$.
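Applied to the question's function $g(x) = x^2 + \frac{3}{16}$: here $g'(x) = 2x$, so $g'(\frac14) = \frac12 < 1$ (attractive) while $g'(\frac34) = \frac32 > 1$ (repulsive). A minimal sketch, with arbitrary starting points near each fixed point:

```python
# Fixed-point iteration for g(x) = x^2 + 3/16.
# g'(x) = 2x: the fixed point 1/4 is attractive, 3/4 is repulsive.
def g(x):
    return x * x + 3.0 / 16.0

x = 0.3                # start near the attractive fixed point 1/4
for _ in range(60):
    x = g(x)
print(x)               # converges to 0.25

y = 0.76               # start just above the repulsive fixed point 3/4
for _ in range(20):
    y = g(y)
print(y)               # the iterates run away from 0.75
```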