How to calculate the approximate zero of any polynomial function without Newton's Method?


Newton's method is very interesting; however, I am not always sure that I will get what I am looking for when I use it. For example, if I use it to get an approximation for $x^2 = 3$, the method will only work if my initial guess isn't at the vertex of the parabola, where the derivative is zero.

And there are also other issues, for instance, I might get a "cycle".

Is there any other method which does what Newton's Method does, but which doesn't depend on me choosing an adequate point?


By the way, with regard to Newton's method, the bisection method, and Brent's method: which is best? I mean, could you please give me the pros and cons of those methods and/or similar ones?

5 Answers

BEST ANSWER

There's the Bisection Method, but it's slow.
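As a sketch of what this answer refers to, here is a minimal bisection routine (the function and interval are my own illustrative choices). It is guaranteed to converge whenever the endpoints bracket a sign change, but the interval only halves each step, which is why it is slow:

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: repeatedly halve an interval [a, b] on which f
    changes sign; guaranteed to converge, but only linearly."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:       # root lies in the left half
            b, fb = m, fm
        else:                 # root lies in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

# Example from the question: solve x^2 = 3 on [0, 3]
root = bisect(lambda x: x * x - 3, 0.0, 3.0)
```

Note that, unlike Newton's method, no starting guess or derivative is needed; only a sign-changing bracket.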

ANSWER

You can always choose different starting points and run Newton's method; if it converges to a solution, you will know, because when you plug the result into the equation you will find that it solves it. If you're worried about getting stuck in some large cycle, you can try the different starting points in parallel: go through all the current points one at a time, apply one Newton step to each, and repeat until you find a point that is approximately a solution.
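A minimal sketch of this multi-start idea, assuming we also have the derivative `df` (the function name and the convergence test are my own; the answer itself doesn't prescribe them):

```python
def parallel_newton(f, df, starts, tol=1e-10, max_iter=100):
    """Advance one Newton step on several starting points 'in parallel'
    and stop as soon as any iterate is an approximate root."""
    points = list(starts)
    for _ in range(max_iter):
        next_points = []
        for x in points:
            if abs(f(x)) < tol:        # verify by plugging in
                return x
            d = df(x)
            if d != 0:                 # skip points stuck where f' = 0
                next_points.append(x - f(x) / d)
        points = next_points or points
    raise RuntimeError("no starting point converged")

# x^2 = 3 again; the guess 0.0 sits at the vertex and is simply skipped
root = parallel_newton(lambda x: x * x - 3, lambda x: 2 * x,
                       [0.0, 1.0, -5.0])
```

The bad guess at the vertex no longer sinks the whole computation; the other starting points carry on without it.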

ANSWER

Brent's method is a commonly used one: http://en.wikipedia.org/wiki/Brent%27s_method

From Wikipedia:

In numerical analysis, Brent's method is a complicated but popular root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less reliable methods.
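Full Brent's method has more bookkeeping than fits here, but the core idea, a fast secant step guarded by a bisection fallback whenever the step misbehaves, can be sketched in a simplified Dekker-style form (this is my own simplification, not the complete algorithm):

```python
def hybrid_root(f, a, b, tol=1e-12, max_iter=100):
    """Dekker-style hybrid: try a secant step, fall back to bisection
    when the secant step leaves the bracketing interval [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = 0.5 * (a + b)
    for _ in range(max_iter):
        # fast step: secant through the current bracket endpoints
        c = b - fb * (b - a) / (fb - fa) if fb != fa else 0.5 * (a + b)
        # safety: bisect if the secant step escaped the bracket
        if not (min(a, b) < c < max(a, b)):
            c = 0.5 * (a + b)
        fc = f(c)
        if abs(fc) < tol or abs(b - a) < tol:
            return c
        # keep whichever half still brackets the root
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

root = hybrid_root(lambda x: x * x - 3, 0.0, 3.0)
```

This illustrates the trade-off the quote describes: the bracket guarantees reliability like bisection, while the secant steps supply the speed. (In practice you would use a library implementation, e.g. `scipy.optimize.brentq`, rather than rolling your own.)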

ANSWER

You might want to read a little on "continuation methods". Very often you don't want a root at a single parameter value; rather, you want to solve $f(x; p) = 0$ to obtain a branch of roots $x_s(p)$. In this case, as you vary $p$, the number of roots can change, and the branch that you are on can disappear, etc. You can do this for systems too, and $p$ need not be scalar either; it can be a set of parameters that are varied. There are many numerical codes for handling these problems; "AUTO" (which I think was written by Eusebius Doedel) is one that comes to mind, but it is not the only one.
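The simplest flavor of this, natural-parameter continuation, can be sketched as follows: step through the parameter values and reuse the root found at each $p$ as the Newton starting guess for the next one (the function names and the quadratic example are my own illustration, far from what a full package like AUTO does):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration from a single starting point."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def continuation(f, df, x0, p_values):
    """Natural-parameter continuation: track a root branch x_s(p)
    by reusing the previous root as the next Newton starting point."""
    branch = []
    x = x0
    for p in p_values:
        x = newton(lambda t: f(t, p), lambda t: df(t, p), x)
        branch.append((p, x))
    return branch

# Track the branch x_s(p) = sqrt(p) of f(x; p) = x^2 - p from p=1 to p=4
branch = continuation(lambda x, p: x * x - p, lambda x, p: 2 * x,
                      1.0, [1.0, 2.0, 3.0, 4.0])
```

Because each new guess starts close to the root, Newton's method is far less likely to wander off, which is part of the appeal of continuation.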

ANSWER

Concerning the original question: it is known that the secant method is more reliable than Newton's method, and it has been suggested that similar methods using more than two points, such as Sidi's generalization of the secant method, can be even more reliable.

Although I have yet to see a formal proof of the latter claim, there have been papers offering numerical evidence that it may be true. It is known, however, that if Newton's method converges from any initial point on an interval, then the secant method will also converge from initial points on that interval; the converse is not true.

Note that the function must be sufficiently well behaved around the root near where the initial guesses are chosen. Methods such as the secant method merely extend how far off your guesses can be; they cannot be guaranteed to always work.
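For concreteness, here is a minimal secant iteration (my own sketch): it is exactly Newton's method with the derivative replaced by a finite-difference slope through the last two iterates, so no derivative, and no worry about landing exactly on a vertex, is needed:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton-like iteration where the derivative is
    replaced by the slope of the line through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("flat secant; pick different points")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1        # slide the two-point window forward
        x1, f1 = x2, f(x2)
    return x1

# x^2 = 3 once more, started from two ordinary points
root = secant(lambda x: x * x - 3, 1.0, 2.0)
```

The two starting points still have to be reasonable, which is exactly the caveat above: the secant method stretches how "bad" the guesses may be, but does not remove the requirement entirely.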