Newton's Method to Find a Maximum


Most of the time, Newton's method in optimization is used to find a local minimum of a function. I am wondering what happens if we have a maximization problem instead. What does the update equation become in the maximization case:

$$x_{k+1} = x_k - t\,\Delta x \quad\text{or}\quad x_{k+1} = x_k + t\,\Delta x$$

This question was asked here

Basic Question about Newton's Method for Optimization

However, there are two different and opposing responses, and both have received votes. Can anyone clarify this ambiguity?


On BEST ANSWER

It depends on which Newton method you are referring to.

In one case, it's Newton's root-finding algorithm applied to the gradient of the function. This method will find a local extremum, which may be a minimum, a maximum, or a saddle point. To determine which, you need further exploration (for instance, looking at second-order information or at the values of the function at the different extrema found).
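To make this concrete, here is a minimal sketch (my own illustrative code, not from either linked answer) of Newton root-finding applied to the derivative of a one-dimensional function. Note that the update $x_{k+1} = x_k - f'(x_k)/f''(x_k)$ has the *same sign* whether we seek a maximum or a minimum; which extremum we land on depends only on the starting point, and we classify it afterwards with second-order information:

```python
def newton_extremum(df, d2f, x, iters=50):
    """Newton's root-finding method applied to f':
    x_{k+1} = x_k - f'(x_k) / f''(x_k).
    Converges to a zero of f', i.e. an extremum or saddle of f."""
    for _ in range(iters):
        x = x - df(x) / d2f(x)
    return x

# Example: f(x) = x**3 - 3x has a local max at x = -1 and a local min at x = +1
df = lambda x: 3 * x**2 - 3   # f'(x)
d2f = lambda x: 6 * x         # f''(x)

x_a = newton_extremum(df, d2f, 2.0)    # starting right of the minimum
x_b = newton_extremum(df, d2f, -2.0)   # starting left of the maximum

# Classify each extremum via the sign of f''
print(x_a, "minimum" if d2f(x_a) > 0 else "maximum")  # → 1.0 minimum
print(x_b, "minimum" if d2f(x_b) > 0 else "maximum")  # → -1.0 maximum
```

The same iteration, with the same minus sign, found the minimum from one start and the maximum from the other; that is why this version of Newton's method needs no sign change for maximization.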

In the other case, it's the gradient-descent interpretation. Here we take steps in the direction of the gradient $\nabla f$ to increase the function (to find maxima), and in the direction of the negative gradient $-\nabla f$ to decrease the function (to find minima). So in this interpretation the sign of the update does flip between the two problems.
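A small sketch of this second interpretation (again illustrative code of my own, with an assumed toy objective): the only difference between seeking a maximum and seeking a minimum is the sign in front of the step.

```python
def gradient_method(grad, x, t=0.1, steps=200, maximize=True):
    """Fixed-step first-order method:
    x_{k+1} = x_k + t * grad(x_k)  (ascent, for maxima)
    x_{k+1} = x_k - t * grad(x_k)  (descent, for minima)."""
    sign = 1.0 if maximize else -1.0
    for _ in range(steps):
        x = x + sign * t * grad(x)
    return x

# Example: f(x) = -(x - 3)**2 has gradient -2*(x - 3) and a unique maximum at x = 3
grad = lambda x: -2 * (x - 3)

x_max = gradient_method(grad, x=0.0, maximize=True)
print(x_max)  # approaches 3.0, the maximizer
```

With `maximize=False` the same routine performs ordinary gradient descent on a function that has a minimum; for this concave example only the ascent direction converges.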