I have the following function
$$g(x) = x^2 + \frac{3}{16}$$
for which I found the two fixed points $x_1 = \frac{1}{4}$ and $x_2 = \frac{3}{4}$.
I noticed that the fixed-point iteration $$x_{k+1} = g(x_k)$$ actually converges around $x_1$, but not around $x_2$: since $g'(x) = 2x$, the absolute value of the derivative of $g$ is less than $1$ exactly on the interval $$A = \left(-\frac{1}{2}, \frac{1}{2} \right),$$ and I can find a sub-interval of $A$ from which to pick an initial guess for the fixed-point iteration to find $x_1$. As I said above, I can't do the same thing for $x_2$ because, first of all, it lies outside $A$.
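Here is a minimal sketch of the behaviour I'm describing, assuming Python and two arbitrary starting guesses, $0.3$ near $x_1$ and $0.8$ just above $x_2$:

```python
# A small check of the iteration x_{k+1} = g(x_k) = x_k**2 + 3/16
# (a sketch; the starting guesses 0.3 and 0.8 are arbitrary choices).

def g(x):
    return x * x + 3.0 / 16.0

for x0 in (0.3, 0.8):                 # near x1 = 1/4, and just above x2 = 3/4
    x = x0
    for k in range(15):
        x = g(x)
        if abs(x) > 1e6:              # stop once the iterates clearly blow up
            break
    print(f"x0 = {x0}: after {k + 1} steps, x = {x:.6g}")

# Expected behaviour: from 0.3 the iterates settle at 0.25 (the fixed point 1/4),
# while from 0.8 they grow without bound, so 3/4 is not reached this way.
```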
Now, I need to find roughly how many iterations are required to reduce the convergence error by a factor of $10$, but I'm not sure what that means, and therefore how to find it.
I've looked around for explanations, but I'm still very confused.
Questions that may help you to help me
What's the convergence error?
Does "by a factor of $10$" mean that if the initial convergence error is, say, $e = x$, we want to find the number of iterations required so that the error becomes $e = \frac{x}{10}$? Even so, this doesn't make much sense to me, since I don't know what the convergence error is or how it relates to fixed-point iteration in general.
The convergence error is just the error between your current approximation $x_i$ and the root you are converging to, $\frac 14$, so it is $e_i = \left|x_i - \frac 14\right|$.

When you are close to the root, the error is multiplied by approximately $g'\!\left(\frac 14\right) = \frac 12$ at each step. You can verify this experimentally with a spreadsheet. You can justify it theoretically by expanding $g(x)$ around the root in a Taylor series:
$$x_{i+1} - \tfrac 14 = g(x_i) - g\!\left(\tfrac 14\right) \approx g'\!\left(\tfrac 14\right)\left(x_i - \tfrac 14\right) + \dots$$

You are asked how many steps it takes to multiply the error by $\frac 1{10}$.
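As a minimal sketch of that experimental check, assuming Python in place of a spreadsheet and an arbitrary start $x_0 = 0.3$:

```python
# Track the error e_i = |x_i - 1/4| and the ratio e_{i+1}/e_i for the
# iteration x_{i+1} = x_i**2 + 3/16 (a sketch; x0 = 0.3 is an arbitrary start).

def g(x):
    return x * x + 3.0 / 16.0

root = 0.25
x = 0.3
err = abs(x - root)
for i in range(10):
    x_next = g(x)
    err_next = abs(x_next - root)
    print(f"step {i + 1}: error = {err_next:.3e}, ratio = {err_next / err:.4f}")
    x, err = x_next, err_next

# The ratio column settles near 0.5 = g'(1/4), as the Taylor argument predicts.
```

As a rough worked reading of the question (assuming the error really does shrink by about $\frac 12$ per step): after $n$ steps the error is scaled by $\left(\frac 12\right)^n$, and $\left(\frac 12\right)^n \le \frac 1{10}$ requires $n \ge \frac{\log 10}{\log 2} \approx 3.3$, i.e. on the order of $3$ to $4$ iterations per factor of $10$.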