I'm implementing a root-solver to find the x-coordinate of a function f(x) at a given y-coordinate. The function is periodic and roughly sinusoidal, with constant amplitude but non-linearly varying frequency; I have no closed form for the inverse (it is an infinite series), so I use Newton iteration to find the x-value at a given y, starting the iteration at an $x_0$ that is already rather near the true value, something like $x=\operatorname{newton}(x=x_0,\,f(x)-y)$.
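For concreteness, here is a minimal sketch of that setup in Python. The actual f is not given in the question, so $f(x)=\sin(x^2)$ is used as a stand-in with the same character (sinusoidal, non-linearly varying frequency):

```python
import math

def f(x):
    # stand-in for the real function: sinusoidal with
    # non-linearly varying frequency (a "chirp")
    return math.sin(x * x)

def fprime(x):
    # derivative of the stand-in, needed by Newton
    return 2.0 * x * math.cos(x * x)

def newton(x0, g, dg, tol=1e-12, max_iter=50):
    """Plain Newton iteration on g, started at x0."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton did not converge")

# solve f(x) = y for a y away from the extrema,
# starting near the true root -- this case is unproblematic
y = 0.5
x = newton(0.7, lambda t: f(t) - y, fprime)
```

Away from the extrema this converges in a handful of iterations; the trouble described below appears only when y approaches the maximum value of f.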
In most cases this works fine. However, if y is near a maximum (or minimum) of f, where the shape closely resembles the peak of a sine curve, the Newton iteration does not converge. Wikipedia gives some information about this failure mode, but no workaround. The last resort would be binary search, which I'd like to avoid since evaluating f(x) is (relatively) costly.
Does anyone know an improvement in the spirit of the Newton iteration (which often has quadratic convergence) for this region of y-values?
[update] Hmmm... perhaps it needs only a tiny twist? It just occurred to me that it might be possible to build on the way I find the easily approximated x for the maximum y: there I apply Newton to the derivative of the function and search for its zero:
$x_{max}=\operatorname{newton}(x=x_0,\,f'(x)) \qquad$ and this has the usual quadratic convergence. But how can this be applied to some y near the maximum?
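That step itself can be sketched as follows, again with the hypothetical stand-in $f(x)=\sin(x^2)$ (so $f'(x)=2x\cos(x^2)$ and $f''(x)=2\cos(x^2)-4x^2\sin(x^2)$); Newton applied to $f'$ needs $f''$ and locates the extremum directly:

```python
import math

def fprime(x):
    # first derivative of the stand-in f(x) = sin(x^2)
    return 2.0 * x * math.cos(x * x)

def fsecond(x):
    # second derivative of the stand-in f(x) = sin(x^2)
    return 2.0 * math.cos(x * x) - 4.0 * x * x * math.sin(x * x)

def newton(x0, g, dg, tol=1e-12, max_iter=50):
    """Plain Newton iteration on g, started at x0."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton did not converge")

# Newton on f' finds the zero of the derivative, i.e. the
# extremum: here the maximum near x = sqrt(pi/2), where f = 1
x_max = newton(1.2, fprime, fsecond)
```

This converges quadratically because $f''$ is well away from zero at the extremum, even though $f'$ vanishes there, which is exactly what defeats plain Newton on $f(x)-y$ for y near the maximum.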
Have you considered implementing a hybrid method? For example, at each step:
IF a Newton step would land outside the bounds within which you have determined the root must lie, THEN take a bisection step instead (slower than Newton, but bisection always converges to a bracketed root and is not affected by extrema), or a step of some other method that is not prone to failing near extrema.
ELSE take the Newton step (since it converges quadratically, as you pointed out).
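A minimal sketch of such a safeguarded Newton method, again using the hypothetical stand-in $f(x)=\sin(x^2)$ from above and a y very close to the maximum value 1, where unguarded Newton tends to be thrown far away:

```python
import math

def f(x):
    return math.sin(x * x)            # stand-in for the costly f

def fprime(x):
    return 2.0 * x * math.cos(x * x)

def safe_newton(g, dg, a, b, tol=1e-12, max_iter=100):
    """Newton iteration safeguarded by a bracket [a, b] on which g
    changes sign; falls back to a bisection step whenever the
    Newton step would leave the current bracket."""
    ga, gb = g(a), g(b)
    if ga == 0.0:
        return a
    if gb == 0.0:
        return b
    if ga * gb > 0.0:
        raise ValueError("root is not bracketed by [a, b]")
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        gx = g(x)
        if gx == 0.0:
            return x
        # shrink the bracket around the root
        if ga * gx < 0.0:
            b = x
        else:
            a, ga = x, gx
        dgx = dg(x)
        take_newton = dgx != 0.0
        if take_newton:
            x_new = x - gx / dgx
            take_newton = a < x_new < b   # reject steps leaving the bracket
        if not take_newton:
            x_new = 0.5 * (a + b)         # bisection fallback
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# y very close to the maximum f(sqrt(pi/2)) = 1; the root is
# bracketed between a point well below and the maximum itself
y = 0.999
root = safe_newton(lambda t: f(t) - y, fprime,
                   1.0, math.sqrt(math.pi / 2))
```

Each iteration costs one evaluation of f and one of f', the same as plain Newton; the bracket bookkeeping only guards against divergence, so quadratic convergence is kept whenever the Newton steps are accepted.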