How to find the root of a polynomial function closest to the initial guess?


I need an easy-to-implement, fast numerical method that finds the root of a nonlinear function (a polynomial in my case) closest to my initial guess. If I know that there is exactly one root $x^\star_k\in I$ and I start in the middle of $I$, is there a numerical method that guarantees convergence to exactly this root?

Newton's method does not seem to guarantee that I stay within the interval, since it may converge to a zero farther away, outside the interval, right? Can I modify Newton's method $x_{n+1}=x_n - \frac{f(x_n)}{f'(x_n)}$ somehow to ensure that I stay within the interval?

Or maybe it is not possible at all with Newton's method, but with the bisection method instead, albeit at a slower pace?

More precisely, I have a polynomial of degree 5 to 10 and want to find all of its roots, but I know approximately where the roots are; that is, I have divided the x-range into intervals $I_k\subseteq X$ that each contain exactly one root $x^\star_k\in I_k$. I further have some regularity assumptions on my function (no infinite derivatives, etc.).

Thanks in advance for your help.


There are 3 best solutions below

---

You can combine bisection and Newton's method very efficiently. On page 359 of Numerical Recipes in Fortran you will find the subroutine RTSAFE, which does exactly what you want. It is very robust.

I apologize for giving you a reference to Fortran code. If you have access to the Numerical Recipes books, you will find the equivalent routines in C and C++.
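For illustration, here is a minimal Python sketch of the same idea (my own code, not the RTSAFE routine itself; the name `safe_newton` is made up): take a Newton step whenever it lands inside the current bracketing interval, and fall back to a bisection step otherwise.

```python
def safe_newton(f, df, a, b, tol=1e-12, max_iter=100):
    """Newton's method safeguarded by bisection: every iterate is forced
    to stay inside the current bracketing interval [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = 0.5 * (a + b)              # start in the middle of the interval
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0:
            return x
        # shrink the bracket using the sign of f(x)
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        # try a Newton step; accept it only if it stays inside (a, b)
        if dfx != 0:
            x_new = x - fx / dfx
            if a < x_new < b:
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
                continue
        x = 0.5 * (a + b)          # otherwise fall back to bisection
    return x
```

Because every iterate lies inside the bracket, the iteration cannot run off to a root outside the interval, which addresses the concern in the question.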

---

Regarding "easy to implement", I would recommend that you use a root-finding function written by an expert, rather than writing your own. That's certainly easier, and will probably lead to better results.

The algorithms in the "Numerical Recipes" books are typically pretty good, though they favor simplicity over optimality.

If you really want to write your own root-finder ...

Newton's method can be adapted to stay within a given interval that's known to contain a root. Look up "root bracketing".

Alternatively, use the secant method instead of Newton's method. Its theoretical rate of convergence is almost as fast, and its bracket-keeping variant, the false position (regula falsi) method, naturally preserves root bracketing.
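For comparison, here is a minimal sketch of the plain secant iteration (the name `secant` is my own); it replaces the derivative in Newton's method by the finite-difference slope through the last two iterates:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Plain secant iteration: Newton's method with the derivative
    replaced by the slope through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:               # flat secant line: cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1
```

No derivative function is needed, which is convenient when $f'$ is expensive or unavailable.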

For a fascinating analysis (and a glimpse at some of the difficulties involved), take a look at these notes by a leading expert.

It's also worth mentioning that there are specialized methods for finding roots of polynomials. One of the best approaches is to compute the eigenvalues of the polynomial's companion matrix. Again, I would not recommend that you write your own software for finding eigenvalues; use proven existing code (e.g., LAPACK) instead.
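As a sketch of that approach (the helper name `poly_roots` is my own; this is essentially what `numpy.roots` does internally):

```python
import numpy as np

def poly_roots(coeffs):
    """Roots of p(x) = coeffs[0]*x^n + ... + coeffs[n], computed as the
    eigenvalues of the polynomial's companion matrix."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                    # normalize to a monic polynomial
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # ones on the subdiagonal
    C[:, -1] = -c[:0:-1]            # last column: -a_0, ..., -a_{n-1}
    return np.linalg.eigvals(C)
```

For a polynomial of degree 5 to 10 this is fast and returns all roots at once, real and complex; the intervals $I_k$ can then simply be used to match each computed root to its bracket.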

---

The middle ground between Newton's method and bisection is the regula falsi or false position method. See Wikipedia for details and for anti-stalling modifications that make this method only slightly slower than the secant method while retaining the convergence guarantees of a bracketed method.

def illinois(f, a, b, tol):
    '''Regula falsi (false position) method with the Illinois anti-stalling variation.'''
    # abs is a Python builtin, so no import is needed.
    # b is the (theoretically) best point,
    # a is the last computed point where f has the opposite sign
    fa = f(a)
    fb = f(b)
    if abs(fb) > abs(fa):  # make b the point with the smaller residual
        a, fa, b, fb = b, fb, a, fa
    while abs(b - a) > tol * (abs(a) + abs(b)):
        # c = (a*fb - b*fa)/(fb - fa)  # symmetric form, but can be numerically unstable
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if fa * fc < 0:  # a is kept and might stall away from the root
            fa *= 0.5    # halve fa to pull the next secant point towards a
        else:            # f(c) and f(b) have opposite signs: [b, c] is the new bracket
            a = b; fa = fb

        b = c; fb = fc
    return b, fb

If the initial $[a,b]$ is not a proper bracketing interval, this will replicate the secant method until points with opposite signs are found. During that stage, points outside the initial interval may be encountered.