Say I want to find the root of $f(x) = e^{-x} - 5$, and assume I start with initial guesses $x_0 = -3$ and $x_1 = 3$.
I define my update via the secant method: $x_i = x_{i-1} - f(x_{i-1}) \cdot \frac{x_{i-1} - x_{i-2}}{f(x_{i-1}) - f(x_{i-2})}$.
Within a few iterations, the denominator $f(x_{i-1}) - f(x_{i-2})$ becomes extremely small, and the entire correction term ends up on the order of $10^{-17}$.
In double precision, the update is then lost: subtracting a correction whose magnitude is below roughly $10^{-16}$ relative to $x_{i-1}$ is rounded away, so it is as if we subtracted $0$. We get $x_i = x_{i-1}$ and the iteration never converges.
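For concreteness, here is a minimal sketch of the iteration described above (plain Python; the `tol`, `max_iter`, and status-string choices are mine, not from any library). Running it on $f(x) = e^{-x} - 5$ with the guesses $x_0 = -3$, $x_1 = 3$ shows the iteration stalling well short of the true root $x = -\ln 5 \approx -1.609$:

```python
import math

def f(x):
    return math.exp(-x) - 5.0

def secant(f, x0, x1, tol=1e-12, max_iter=60):
    """Plain secant iteration; returns (x, status) so stalls are visible."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        denom = f1 - f0
        if denom == 0.0:
            # f-values identical in double precision: update is undefined
            return x1, "zero denominator"
        x2 = x1 - f1 * (x1 - x0) / denom
        if x2 == x1:
            # correction rounded away: x_i == x_{i-1}, iteration is stuck
            return x2, "stagnated"
        try:
            f2 = f(x2)
        except OverflowError:
            # exp(-x) overflows when an iterate jumps far into x < 0
            return x2, "overflow"
        if abs(f2) < tol:
            return x2, "converged"
        x0, f0, x1, f1 = x1, f1, x2, f2
    return x1, "max iterations"

x, status = secant(f, -3.0, 3.0)
print(status, x)
print("true root:", -math.log(5.0))
```

The flat tail of $e^{-x} - 5$ for large $x$ is what drives the tiny denominators: once two iterates both land where $f \approx -5$, their $f$-values agree to nearly all double-precision digits.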
Is there some way to quantify how one should choose the initial guesses to avoid running into this? Or is there a more robust, numerically stable way to run the root-finding iteration?