Is there any robust method for finding stationary points of a nonlinear, non-convex function of several variables?


Suppose that I have a family of non-linear, non-convex, but generally well-behaved functions $f: \mathbb{R}^N \to \mathbb{R}$ (e.g. no discontinuities beyond some numerical noise).

Is there a numerical algorithm that robustly converges to the nearest stationary point (any point with $\|\nabla f\| < \epsilon$, where $\epsilon$ is some threshold to accommodate finite-precision arithmetic), even when started from an arbitrary initial point? "Nearest" here is measured by the Euclidean norm of the vector connecting the initial and final points.
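For concreteness, the stopping criterion I have in mind looks roughly like the sketch below, with the gradient obtained by numerical differentiation (the helper names `grad_fd` and `is_stationary` and the default tolerances are mine, chosen for illustration):

```python
import numpy as np

def grad_fd(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def is_stationary(f, x, eps=1e-4):
    """True if the Euclidean norm of the numerical gradient is below eps."""
    return np.linalg.norm(grad_fd(f, x)) < eps
```

For example, with $f(x, y) = x^2 - y^2$, `is_stationary(f, [0, 0])` is true (the origin is a saddle) while `is_stationary(f, [1, 1])` is false.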

Finding local minima is less of a concern here, as there is an arsenal of tricks one can employ to eventually arrive at a minimum, but finding saddle points is much more challenging.

Newton's method is known to be able to converge to stationary points other than minima, but it is also known not to be globally convergent: unless it is started sufficiently near a stationary point, it can easily diverge or get trapped in an oscillation.
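To illustrate the first half of that statement, here is a minimal sketch of plain Newton iteration on $\nabla f = 0$ (analytic derivatives, no globalization safeguards); on the quadratic saddle $f(x, y) = x^2 - y^2$ it lands on the saddle at the origin in a single step, since the iteration does not distinguish minima from other stationary points:

```python
import numpy as np

def newton_stationary(grad, hess, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration on grad f = 0. Converges to whichever
    stationary point (minimum, maximum, or saddle) attracts x0, and
    may diverge or oscillate for a poor starting point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Newton step: solve H(x) dx = grad(x), then move by -dx.
        x = x - np.linalg.solve(hess(x), g)
    return x

# f(x, y) = x^2 - y^2: the only stationary point is the saddle at the origin.
grad = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, -2.0]])
x_star = newton_stationary(grad, hess, [3.0, -1.5])  # converges to (0, 0)
```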

So far, most of the literature I have found focuses on avoiding saddle points, whereas I am intentionally trying to find them.

One approach that I concocted myself was to directly minimize the Euclidean norm of the gradient via Newton's method, using numerical differentiation to compute the gradient and Hessian of the gradient norm. In practice this was not successful: the optimization always seemed to get stuck eventually (although I still need to double-check my code for implementation mistakes).
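A minimal sketch of that idea, using BFGS as a stand-in for the full Newton step and a test function of my own choosing (a double well plus a quadratic, which has a saddle at the origin); every minimizer of the squared gradient norm with value zero is a stationary point of $f$, though the iteration can also stall at points where $\nabla \|\nabla f\|^2$ vanishes while $\nabla f$ does not, which may be one source of the "getting stuck" behavior:

```python
import numpy as np
from scipy.optimize import minimize

def grad_fd(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def grad_norm_sq(f, x):
    """Squared Euclidean norm of the numerical gradient of f at x."""
    g = grad_fd(f, x)
    return float(g @ g)

# Illustrative test function: f(x, y) = (x^2 - 1)^2 + y^2, which has
# minima at (+-1, 0) and a saddle point at the origin.
f = lambda x: (x[0]**2 - 1.0)**2 + x[1]**2

# Minimizing ||grad f||^2 from a start near the origin should recover
# the saddle at (0, 0), a point a descent method on f itself would avoid.
res = minimize(lambda x: grad_norm_sq(f, x), x0=[0.2, 0.3], method='BFGS')
```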