I need to code an algorithm that finds a root of a function $f$, i.e. a point $x$ where $f(x)=0$. I can assume that I have identified an interval $[a,b]$ with $f(a)<0$ and $f(b)>0$ on which the function is monotone and continuous, and hence I know that there is a solution to $f(x)=0$ in the interval. I am also able to calculate the derivative $f'(x)$ at any point.
I also know that there are other solutions to $f(x) = 0$ outside of $[a,b]$ but I am only interested in the solution inside the interval. I could use Newton's method, but that may overshoot the interval and find the wrong solution. I could also use the bisection method but that would be too slow. How could I combine Newton's method with the bracketing such that I have a guaranteed fast convergence to the correct root?
One idea I had was to use a Newton step to update the endpoint with the smaller absolute function value (e.g., update $a$ if $|f(a)|<|f(b)|$), replacing the appropriate interval boundary based on the sign of $f$ at the new estimate, and falling back to a bisection step whenever the Newton update fell outside the current interval. How would you do it?
If you look here, you will find a subroutine named RTSAFE from Numerical Recipes which does exactly what you need. That version is coded in C, but you will find implementations in almost any other programming language.
The Fortran version of it can be found here.
You could even modify it easily if you want to use higher-order methods (such as Halley's or Householder's).
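To give an idea of how it works, here is a minimal C sketch of the safeguarded-Newton technique that RTSAFE implements (this is not the Numerical Recipes code verbatim; the function name `safe_newton` and the exact step-acceptance test are my own choices). It maintains a bracket $[lo, hi]$ with $f(lo)<0<f(hi)$, as in your setup, and takes a bisection step whenever the Newton step would leave the bracket or is not shrinking fast enough:

```c
#include <math.h>

/* Safeguarded Newton iteration (sketch in the spirit of RTSAFE).
 * Maintains a bracket [lo, hi] with f(lo) < 0 < f(hi); takes a Newton
 * step when it stays inside the bracket and converges fast enough,
 * otherwise falls back to bisection.  f is the function, df its
 * derivative, tol the tolerance on the step size. */
double safe_newton(double (*f)(double), double (*df)(double),
                   double lo, double hi, double tol, int max_iter)
{
    double x = 0.5 * (lo + hi);       /* initial guess: midpoint */
    double dx_old = fabs(hi - lo);    /* previous step size */
    double dx = dx_old;
    double fx = f(x), dfx = df(x);

    for (int i = 0; i < max_iter; i++) {
        /* Accept the Newton step only if it lands inside [lo, hi]
         * and its size is at most half the step before last. */
        int newton_ok =
            ((x - hi) * dfx - fx) * ((x - lo) * dfx - fx) < 0.0 &&
            fabs(2.0 * fx) <= fabs(dx_old * dfx);
        dx_old = dx;
        if (newton_ok) {
            dx = fx / dfx;            /* Newton step */
            x -= dx;
        } else {
            dx = 0.5 * (hi - lo);     /* bisection step */
            x = lo + dx;
        }
        if (fabs(dx) < tol)
            return x;
        fx = f(x);
        dfx = df(x);
        /* Shrink the bracket using the sign of f(x). */
        if (fx < 0.0) lo = x; else hi = x;
    }
    return x;   /* best estimate after max_iter iterations */
}
```

Because every iterate stays inside the bracket, you can never converge to one of the roots outside $[a,b]$, and because the bisection fallback at least halves the interval, the worst case is no slower than bisection while the typical case is quadratic Newton convergence.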