Line search using secant method


To find the root of a strictly increasing function $f(x)$, given two test points $x_0, x_1$ such that $f(x_0) < 0$ and $f(x_1) > 0$, we can use the secant method, which has the following iteration scheme:

$$\begin{align} x_2 &:= x_1 - f(x_1) \frac{ (x_1 - x_0)} {f(x_1) - f(x_0)} \\ x_0 &:= x_1\\ x_1 &:= x_2\\ \end{align}$$
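For concreteness, here is a minimal sketch of that iteration in Python (the function name and tolerances are my own choices, not from any particular library):

```python
def secant_root(f, x0, x1, tol=1e-10, max_iter=100):
    """Find a root of f via the secant iteration described above."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-15:  # secant is flat; stop to avoid division by zero
            break
        # Secant update: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, x1 = x1, x2          # shift the pair of test points
        if abs(f(x1)) < tol:
            break
    return x1

# Example: f(x) = x^3 - 2 is strictly increasing, with f(1) < 0 < f(2).
root = secant_root(lambda x: x**3 - 2, 1.0, 2.0)  # root is 2**(1/3)
```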

Minimizing a (strictly) convex univariate function $f(x)$ means finding the zero of its (strictly) increasing derivative $g(x)$. A common way to do this exact line search is the golden-section method. But since univariate derivatives are easy to approximate numerically (choose a small $h$ and compute the forward difference $g(x) \approx \frac{f(x+h) - f(x)}{h}$), why not use the secant method instead? The iteration scheme is

$$\begin{align} x_2 &:= x_1 - g(x_1) \frac{ (x_1 - x_0)} {g(x_1) - g(x_0)} \\ x_0 &:= x_1\\ x_1 &:= x_2\\ \end{align}$$

and experimentation shows that this converges quickly toward a local minimum.
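The experiment is easy to reproduce. Below is a hedged sketch (function names, the step size $h$, and the test function are my own illustrative choices) that applies the secant iteration to a finite-difference derivative:

```python
def secant_line_search(f, x0, x1, h=1e-6, tol=1e-8, max_iter=100):
    """Minimize a convex univariate f by running the secant method
    on a forward-difference approximation of its derivative g."""
    g = lambda x: (f(x + h) - f(x)) / h  # numerical derivative

    for _ in range(max_iter):
        g0, g1 = g(x0), g(x1)
        if abs(g1 - g0) < 1e-15:  # derivative secant is flat; stop
            break
        # Same secant update as before, applied to g instead of f
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)
        x0, x1 = x1, x2
        if abs(g(x1)) < tol:
            break
    return x1

# Example: f(x) = (x - 3)^2 + 1 has its minimum at x = 3.
xmin = secant_line_search(lambda x: (x - 3)**2 + 1, 0.0, 5.0)
```

Note that the forward difference biases the answer by roughly $h/2$ for a quadratic, so the returned minimizer is accurate only up to the chosen $h$; a central difference $\frac{f(x+h)-f(x-h)}{2h}$ would reduce that bias at the cost of one extra function evaluation per derivative.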

I ask because the line-search chapter in my textbook skips straight from the golden-section method to inexact line search techniques. The secant method doesn't come up until the chapter on quasi-Newton methods, where it amounts to a technique for estimating the multivariate Hessian.

I can see that the golden-section method has the advantage of not needing any derivative information, but is there any situation in which this secant-method exact line search might be preferable?