If I know that a polynomial (of order $k \gt 2$) has at most $1$ positive real root - can I find that easily?


[update 2] Urgh - the time consumption really stems only from the construction of the order-$h$ polynomial. The time for finding the roots (only 10 to 20 Newton iterations, thanks to my good starting intervals) is completely negligible.
So the efficiency aspect of my question has been resolved, as it rested on the false premise of a costly Newton iteration.
Only the more qualitative question remains open: does a procedure exist that finds only the few real roots of a polynomial? (The answer may well be negative.)


I have a family of polynomials of increasing order (see my discussion in an earlier MSE thread), and the strong hypothesis that each of these polynomials has either 1 or 2 real roots, and always exactly one positive one. Is there then an algorithm to find just that one real root?

I looked at "polsturm" in Pari/GP, which determines the number of real roots but not their values - so I still have to use "polroots", which computes all complex roots.

Well, I also have a hypothesis about the rough value of that root, so I tried the built-in solver with a reasonably small interval.

But with polynomials of order 200 both methods take a fair amount of time, and at order 1000 the solver needed about 10 minutes...
Perhaps there is some Horner-like scheme? So my question is:

Q: Is there an efficient algorithm for finding the real roots of a high-order polynomial if I know there are at most 2?


[update] Using the solver method, I get the following time consumption $$ \small \begin{array} {r|r} \text{order} & \text{msec} \\ \hline \\ 62 & 31 \\ 125 & 296 \\ 250 & 3292 \\ 500 & 39967 \\ 1000 & \gt 400000 \end{array}$$

This is the relevant part of the code in Pari/GP

glpc = [0]  \\ globally accessible vector holding the coefficients
            \\ of the current polynomial

{evalpoly(x) = local(f, su);  \\ Horner-like evaluator of the polynomial
   su = 0; f = 1;             \\ works on the global vector glpc[]
   for(k = 1, #glpc, su += f*glpc[k]; f *= x);
   return(su)}

{mypolroot(h) = local(xl, xu, res);  \\ creates the polynomial of order h and
                                     \\ finds the real root in the interval
                                     \\ predicted by my hypothesis
   glpc = polcoeffs(makepoly(h));    \\ put the coeffs of the h'th polynomial
                                     \\ into glpc
   xl = floor(h*gm/2); xu = ceil(h*gm/2 + 5); \\ "magic" lower and upper bounds
                                              \\ for the positive real root
   res = solve(x = xl, xu, evalpoly(x));
   return(res);}
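For comparison, here is a sketch of the same coefficient-vector evaluation in Python, written as a true Horner recurrence and extended to return the derivative in the same pass (handy if one wants to feed it to Newton's method). The `coeffs` list is assumed to be ordered like `glpc[]` above, constant term first:

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) in one Horner pass.
    coeffs[k] is the coefficient of x^k (constant term first)."""
    p, dp = 0.0, 0.0
    for c in reversed(coeffs):   # process from the leading coefficient down
        dp = dp * x + p          # derivative accumulates the previous p
        p = p * x + c
    return p, dp

# example: p(x) = x^3 - 2 evaluated at x = 2 gives p = 6, p' = 12
p, dp = horner_with_derivative([-2, 0, 0, 1], 2.0)
```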

3 Answers

Accepted answer (score 8)

You say you have a good guess for the location of the root -- you've tried Newton's method?

If Newton's method doesn't converge, i.e. your guess is too inaccurate, you can take advantage of the knowledge that there is only a single positive root to bracket it inside an interval $[a,b]$ with $f(a)f(b) < 0$. In the worst case you can use $a=0$ and find a $b$ by successive doubling. Then you can run Ridders' method. This method is guaranteed to converge, at worst linearly, though in practice it converges much faster once you're near the root.
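A minimal Python sketch of this bracket-and-Ridders approach (the polynomial `f` below is a placeholder for the actual polynomial evaluator, and the tolerances are arbitrary choices):

```python
import math

def bracket_from_zero(f, b=1.0, max_doublings=60):
    """Bracket the single positive root: start at [0, b] and double b
    until the function changes sign."""
    fa = f(0.0)
    for _ in range(max_doublings):
        if fa * f(b) < 0:
            return 0.0, b
        b *= 2.0
    raise ValueError("no sign change found")

def ridders(f, a, b, tol=1e-12, max_iter=60):
    """Ridders' method on a bracket [a, b] with f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        s = math.sqrt(fm * fm - fa * fb)   # > 0 since fa*fb < 0
        # exponentially corrected false-position step
        x = m + (m - a) * (1.0 if fa > fb else -1.0) * fm / s
        fx = f(x)
        if fx == 0.0 or abs(b - a) < tol:
            return x
        if fm * fx < 0:        # root between m and x
            a, b, fa, fb = m, x, fm, fx
        elif fa * fx < 0:      # root between a and x
            b, fb = x, fx
        else:                  # root between x and b
            a, fa = x, fx
    return x

# example: the positive root of x^3 - 2
f = lambda x: x ** 3 - 2
a, b = bracket_from_zero(f)
root = ridders(f, a, b)
```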

EDIT: Also, I feel like I should mention that I've used the Jenkins-Traub algorithm (C code here) in my research projects in the past, as the first (and almost always last) stop for finding polynomial roots. But finding all roots seems overkill when you're only looking for one and can bracket it effectively.

Answer (score 1)

The Chebfun system (see here) takes only a fraction of a second to calculate all the real roots of a polynomial whose degree is in the thousands. The code is written in Matlab, so presumably it would be even faster if it were written in a compiled language.

The root-finding technique is based on computing the eigenvalues of companion matrices. However, a naive implementation gives an algorithm that is $O(n^3)$ in the degree of the polynomial. To improve matters, Chebfun recursively approximates high-degree polynomials by low degree ones. Here "low" means less than 100. See Trefethen's book for details.
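As an illustration of the companion-matrix idea, here is a naive $O(n^3)$ Python/NumPy version (essentially what `numpy.roots` does internally, without Chebfun's recursive subdivision). The helper name and the imaginary-part cutoff are arbitrary choices for the sketch:

```python
import numpy as np

def real_roots_companion(coeffs):
    """Roots via eigenvalues of the companion matrix.
    coeffs[k] is the coefficient of x^k; returns only the
    (numerically) real roots, sorted."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[-1]                   # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # ones on the subdiagonal
    C[:, -1] = -c[:-1]              # last column from the coefficients
    ev = np.linalg.eigvals(C)
    return sorted(ev[np.abs(ev.imag) < 1e-8].real)

# example: x^2 - 3x + 2 has roots 1 and 2
roots = real_roots_companion([2, -3, 1])
```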

I don't know if there are algorithms that take advantage of the fact that the polynomial has only 1 or 2 real roots. But the Chebfun one seems like it might be fast enough for your needs, anyway. A Newton-Raphson solver with good starting points would be even faster.

Also, I agree with user7530's assessment -- evaluating the polynomial is not likely to be the bottleneck on a modern computer, or even on a modern cell-phone or wrist-watch :-)

Answer (score 0)

If you are absolutely sure that there is only one positive real root, then a combination of Newton's method and interval nesting à la the bisection method or regula falsi should work. Pure Newton may fail by entering a stable infinite cycle.

First determine the outer root radius $R$ by one of the various bound formulas and take $[0,R]$ as the initial interval. Compute the function values at the endpoints. Apply Newton's method as long as the iterates stay in the interval, shrinking the interval at each step by replacing the endpoint whose function value has the same sign as that of the current iterate. If the Newton step leaves the interval, take the root of the secant through the endpoints as the next iteration point instead.

One could enrich this with a counter of how many consecutive iterations update the same side of the interval, inserting a bisection step every 5 or 10 iterations to genuinely reduce the interval. See the Illinois variation of regula falsi for inspiration.
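A Python sketch of this safeguarded scheme, using Cauchy's bound as one choice for the outer root radius $R$ (function names and tolerances are my own; the periodic-bisection counter from the Illinois-style refinement is omitted to keep the sketch short):

```python
def cauchy_bound(coeffs):
    """Cauchy's bound: every root satisfies |x| <= 1 + max_i |a_i| / |a_n|.
    coeffs[k] is the coefficient of x^k (constant term first)."""
    return 1.0 + max(abs(c) for c in coeffs[:-1]) / abs(coeffs[-1])

def newton_in_bracket(f, df, a, b, tol=1e-12, max_iter=200):
    """Newton's method kept inside a shrinking bracket [a, b] with
    f(a)*f(b) < 0; when a Newton step leaves the bracket, use the
    secant through the endpoints (a regula-falsi step) instead."""
    fa, fb = f(a), f(b)
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0 or abs(fx) < tol or abs(b - a) < tol:
            return x
        # replace the endpoint whose value has the same sign as f(x)
        if fa * fx > 0:
            a, fa = x, fx
        else:
            b, fb = x, fx
        d = df(x)
        x = x - fx / d if d != 0.0 else a      # tentative Newton step
        if not (min(a, b) < x < max(a, b)):    # step left the bracket:
            x = (a * fb - b * fa) / (fb - fa)  # take a secant step
    return x

# example: the positive root of x^2 - 2, bracketed by [0, R]
f = lambda x: x * x - 2
df = lambda x: 2 * x
R = cauchy_bound([-2, 0, 1])   # R = 3
root = newton_in_bracket(f, df, 0.0, R)
```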