Finding convergence rate for Bisection, Newton, Secant Methods?


I have implemented the bisection, Newton, and secant methods for solving nonlinear equations in one dimension. I know my methods work to find at least one root, however how would I go about solving for the convergence rate for each method? For example if my nonlinear equation is $x^3 - 2x - 5 = 0$ and I use the bisection method, I get a root of $2.09455$, however I am not sure how I can use this to find the convergence rate.


2 Answers

Best Answer

The convergence rate relates the error at one step to the error at the previous step via $$|x_{n+1} - x| \leqslant c\,|x_{n} - x|^p.$$

More precisely, if $$\lim_{n\to\infty}\frac{|x_{n+1}-x|}{|x_n-x|^p}=c$$ for some finite constant $c>0$, you have convergence of order $p$.

Since you usually don't know the exact solution you can use the following formula:

$$ p\approx \frac{\log|\frac{x_{n+1}-x_n}{x_n-x_{n-1}}|}{\log|\frac{x_{n}-x_{n-1}}{x_{n-1}-x_{n-2}}|}$$
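As a minimal sketch of this estimate in practice (assuming the question's equation $x^3-2x-5=0$ and the starting bracket $[2,3]$, where the function changes sign; all variable names here are illustrative):

```python
import math

def f(x):
    return x**3 - 2*x - 5

# Bisection on [2, 3]; collect the midpoint iterates x_n.
a, b = 2.0, 3.0
xs = []
for _ in range(20):
    m = (a + b) / 2
    xs.append(m)
    if f(a) * f(m) <= 0:
        b = m  # sign change in [a, m]
    else:
        a = m

# Empirical order from four consecutive iterates, per the formula above.
# Use an early iteration, while the differences are still well above
# machine precision.
n = 6
num = math.log(abs((xs[n+1] - xs[n]) / (xs[n] - xs[n-1])))
den = math.log(abs((xs[n] - xs[n-1]) / (xs[n-1] - xs[n-2])))
p = num / den
print(p)  # 1.0: bisection halves the step exactly, so p = 1
```

For bisection the consecutive differences halve exactly, so the estimate comes out as exactly $1$; for Newton and secant iterates the same formula gives a noisier number that settles near the theoretical order.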

You should expect results around $1$ for the bisection method, around the golden ratio $\frac{1+\sqrt{5}}{2}\approx 1.618$ for the secant method, and around $2$ for Newton's method.


The methods are said to converge with order $p$ if, asymptotically, $$ e_{i+1} \approx c e_i^p, $$ where $e_i = |x_i - x^*|$ is the error in the approximation at the $i$th iteration, $x^*$ is the true root and $x_i$ is the approximation. Here, $c$ and $p$ are constants independent of $i$.

Thus, to measure the rate of convergence we need to store the error in the approximation at each iteration. Note that $$ \log{e_{i+1}} = \log{c} + p \log{e_i}, $$ i.e. plotting $\log{e_{i+1}}$ against $\log{e_i}$ should (asymptotically) result in a straight line with gradient $p$, which can be measured.
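A minimal sketch of this slope-fitting approach for Newton's method on the question's equation (the "true" root is stood in for by a fully converged Newton run, and the least-squares fit is written out by hand; all names are illustrative):

```python
import math

def f(x):
    return x**3 - 2*x - 5

def fp(x):
    return 3*x**2 - 2

# A highly converged Newton run stands in for the true root x*.
x = 2.0
for _ in range(50):
    x -= f(x) / fp(x)
root = x

# Re-run Newton from the same start, storing the error at each iteration.
x, errors = 2.0, []
for _ in range(6):
    errors.append(abs(x - root))
    x -= f(x) / fp(x)

# Discard errors at machine-precision level, then fit a straight line
# through the points (log e_i, log e_{i+1}); the gradient estimates p.
errors = [e for e in errors if e > 1e-14]
pts = [(math.log(errors[i]), math.log(errors[i+1]))
       for i in range(len(errors) - 1)]
mx = sum(u for u, _ in pts) / len(pts)
my = sum(v for _, v in pts) / len(pts)
slope = (sum((u - mx) * (v - my) for u, v in pts)
         / sum((u - mx) ** 2 for u, _ in pts))
print(slope)  # close to 2 for Newton's method
```

Note the filter on tiny errors: once the iterates hit machine precision the model $e_{i+1} \approx c e_i^p$ breaks down, and those points would ruin the fit.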

Alternatively, note that $$ \frac{e_{i+1}}{e_i} = \frac{ce_i^p}{ce_{i-1}^p} = \left(\frac{e_{i}}{e_{i-1}}\right)^p $$ such that $$ p = \log{\left( \frac{e_{i+1}}{e_i} \right)} \Big/ \log{\left( \frac{e_{i}}{e_{i-1}} \right)}, $$ which is easily calculated from the errors of three consecutive iterations.
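A minimal sketch of the three-error estimate, again for Newton's method on the question's equation (the reference root is again approximated by a long Newton run; names are illustrative):

```python
import math

def f(x):
    return x**3 - 2*x - 5

def fp(x):
    return 3*x**2 - 2

# A highly converged Newton run stands in for the true root x*.
x = 2.0
for _ in range(50):
    x -= f(x) / fp(x)
root = x

# Errors e_0 .. e_3 from a fresh Newton run started at x_0 = 2.
x, e = 2.0, []
for _ in range(4):
    e.append(abs(x - root))
    x -= f(x) / fp(x)

# Order estimate from three consecutive errors (here i = 2), using
# early iterations so no error is at machine-precision level.
i = 2
p = math.log(e[i+1] / e[i]) / math.log(e[i] / e[i-1])
print(p)  # close to 2
```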

Note that the convergence rate may differ between functions. In particular, functions with double roots (e.g. $(x-1)^2$) tend to give slower convergence than simple roots, while some functions show superconvergence (e.g. $\sin{x}$ near $x=0$ for Newton's method). It may be worth trying your code on a few different examples before drawing conclusions about your methods.
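The double-root slowdown is easy to see directly. For $f(x)=(x-1)^2$ the Newton update $x - f(x)/f'(x)$ simplifies to $x - (x-1)/2$, so each step halves the error: linear convergence with ratio $\tfrac12$, not the quadratic rate seen at simple roots. A minimal sketch (illustrative names):

```python
# Newton's method on f(x) = (x - 1)**2, which has a double root at x = 1.
# The update x - f(x)/f'(x) reduces to x - (x - 1)/2, so each step
# halves the error rather than squaring it.
x = 2.0
ratios = []
for _ in range(10):
    e_old = abs(x - 1.0)
    x = x - (x - 1.0)**2 / (2 * (x - 1.0))
    ratios.append(abs(x - 1.0) / e_old)
print(ratios[-1])  # 0.5: the error halves every iteration
```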