[update 2] Urgh - the time consumption really stems only from the construction of the polynomial of order h. The time for finding the roots (only 10 to 20 Newton iterations, thanks to my good starting intervals) is then completely negligible.
So the efficiency aspect of my question has been solved: it rested on the false premise that the Newton iteration was the costly part.
Only the more qualitative question remains open, whether some procedure exists for finding only the few real roots of a polynomial (the answer may well be negative).
I have a family of polynomials of increasing order (see my discussion in the earlier MSE thread), and I have the strong hypothesis that each of these polynomials has either 1 or 2 real roots, and always exactly one positive root. Is there then an algorithm to find just that one real root?
I looked at "polsturm" in Pari/GP, which determines the number of real roots but not their values - so I still have to use "polroots", which computes all complex roots.
Well, I also have a hypothesis about the rough value of that root, so I also tried the built-in solver with a reasonably small interval.
But with polynomials of order 200 both methods take a fair amount of time, and at order 1000 the solver needed about 10 minutes...
Perhaps there is some Horner-like scheme? So my question is:
Q: Is there an efficient algorithm for finding the real roots of a high-order polynomial if I know there are at most 2?
[update] Using the solver method I get the following time consumption: $$ \small \begin{array} {r|r} \text{order} & \text{msec} \\ \hline \\ 62 & 31 \\ 125 & 296 \\ 250 & 3292 \\ 500 & 39967 \\ 1000 & \gt 400000 \end{array}$$
This is the relevant part of the code in Pari/GP:
glpc = [0];  \\ globally accessible vector holding the coefficients
             \\ of the current polynomial

{evalpoly(x) = my(f, su);         \\ Horner-scheme evaluator of the polynomial;
  su = 0; f = 1;                  \\ works on the global vector glpc[]
  for(k = 1, #glpc, su += f*glpc[k]; f *= x);
  return(su); }

{mypolroot(h) = my(xl, xu, res);  \\ builds the polynomial of order h and
                                  \\ finds the real root in the interval
                                  \\ predicted by my hypothesis
  glpc = polcoeffs(makepoly(h));  \\ put the coeffs of the h'th polynomial
                                  \\ into glpc
  xl = floor(h*gm/2); xu = ceil(h*gm/2 + 5); \\ "magic" lower and upper bounds
                                             \\ for the positive real root
  res = solve(x = xl, xu, evalpoly(x));
  return(res); }
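Since the polynomial is evaluated coefficient-by-coefficient anyway, the same loop can also accumulate the derivative, which makes each Newton step essentially free. Here is a minimal sketch in Python for illustration (the ascending coefficient list `coeffs` stands in for `glpc`; all names are illustrative, not part of the original code):

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) in one pass.
    coeffs[k] is the coefficient of x^k (ascending order)."""
    p, dp = 0.0, 0.0
    for c in reversed(coeffs):
        dp = dp * x + p   # derivative picks up the previous value of p
        p = p * x + c
    return p, dp

def newton_root(coeffs, x0, tol=1e-12, maxit=50):
    """Newton iteration for a root of p near the starting guess x0."""
    x = x0
    for _ in range(maxit):
        p, dp = horner_with_derivative(coeffs, x)
        if dp == 0.0:
            break                      # flat spot: give up on this guess
        step = p / dp
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):
            break
    return x
```

With the good starting interval mentioned above, 10 to 20 iterations of this kind are indeed all that is needed.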
You say you have a good guess for the location of the root -- have you tried Newton's method?
If Newton's method doesn't converge, i.e. your guess is too inaccurate, you can take advantage of the knowledge that there is only a single positive root and bracket it inside an interval $[a,b]$ with $f(a)f(b) < 0$. Worst case you can use $a=0$ and find a $b$ by successive doubling. Then you can use Ridders' method. This method is guaranteed to converge, at worst linearly, though in practice it converges much faster once you're near the root.
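The bracket-then-Ridders approach described above can be sketched as follows (Python for illustration; `f` is any evaluator for the polynomial, e.g. a Horner scheme, and the doubling start `b=1.0` is an arbitrary choice):

```python
import math

def bracket_root(f, b=1.0):
    """Find [a, b] with f(a)*f(b) < 0 by successive doubling,
    assuming there is a single positive root."""
    a, fa = 0.0, f(0.0)
    while f(b) * fa > 0:
        b *= 2.0
    return a, b

def ridders(f, a, b, tol=1e-12, maxit=100):
    """Ridders' method for a root bracketed in [a, b]."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(maxit):
        m = 0.5 * (a + b)
        fm = f(m)
        s = math.sqrt(fm * fm - fa * fb)   # real, since fa*fb < 0
        if s == 0.0:
            return m
        # false-position step applied to the exponentially rescaled values
        x = m + (m - a) * (fm if fa > fb else -fm) / s
        fx = f(x)
        if fx == 0.0:
            return x
        # re-bracket with the two evaluated points that straddle the root
        if fm * fx < 0:
            a, fa, b, fb = m, fm, x, fx
        elif fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        if abs(b - a) < tol:
            break
    return 0.5 * (a + b)
```

The re-bracketing step keeps $f(a)f(b) < 0$ invariant, which is what guarantees convergence regardless of the starting guess.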
EDIT: I should also mention that I've used the Jenkins-Traub algorithm (C code here) in past research projects as the first (and almost always last) stop for finding polynomial roots. But finding all the roots seems overkill when you're only looking for one and can bracket it effectively.