Does the Lagrange multiplier method find local optima for non-convex functions?


As stated here, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints.

I'm trying to figure out how this method finds the "local maxima and minima" of a function, regardless of whether it is convex or non-convex.

The basic idea of Lagrange multipliers is that, at a point where a contour of the objective function touches a contour of the constraint function tangentially, the gradients of the two functions are aligned (parallel).

So if the objective function is convex, the solution is a global optimum; but when it's non-convex, why is the solution only a "local optimum"? I presume it's because this method can only discover solutions at tangent points, not at points where the two contours merely intersect. Is that right?
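(To make the tangency idea concrete, here is a small numerical check I put together; the example problem $\min\{x+y : x^2+y^2=1\}$ is my own, not from the linked page.)

```python
# Sketch: verify gradient alignment at the constrained optimum of
#   min f(x, y) = x + y   subject to   g(x, y) = x^2 + y^2 - 1 = 0.
# The constrained minimum is at (-1/sqrt(2), -1/sqrt(2)).
import numpy as np

def grad_f(p):          # gradient of the objective x + y
    return np.array([1.0, 1.0])

def grad_g(p):          # gradient of the constraint x^2 + y^2 - 1
    return np.array([2.0 * p[0], 2.0 * p[1]])

p_star = np.array([-1.0, -1.0]) / np.sqrt(2.0)

# At the tangent point the 2-D cross product of the gradients vanishes,
# i.e. grad f is a scalar multiple of grad g.
gf, gg = grad_f(p_star), grad_g(p_star)
cross = gf[0] * gg[1] - gf[1] * gg[0]
print(abs(cross) < 1e-12)   # True: the gradients are aligned
```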


There are 2 best solutions below

BEST ANSWER

None of the other answers discusses the effects of constraints. They simply discuss what can happen when the derivative of the objective is 0. The question is specifically about Lagrange's method. For solving $\min\{f(x) : g(x)=0\}$, Lagrange pointed out that the following condition is necessary for optimality: $$\nabla f(x) + \lambda \nabla g(x)=0, \quad g(x)=0.$$

The condition is not a sufficient condition, even if the objective is convex. Take for example $\min\{x^2 : \cos(x) = 0\}$. Although $x^2$ is convex, any $x=0.5\pi + k \pi$, $k\in\mathbb Z$ satisfies the Lagrange equations (with $\lambda=2x$ or $\lambda=-2x$). Lagrange's condition is sufficient if the objective to be minimized is convex and the equality constraint is affine.
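A quick numeric check of this counterexample (my own sketch, not part of the answer): every $x = \pi/2 + k\pi$ satisfies the Lagrange system $2x - \lambda \sin(x) = 0$, $\cos(x) = 0$, yet the objective values differ.

```python
# Verify that every x = pi/2 + k*pi solves the Lagrange equations for
#   min { x^2 : cos(x) = 0 }:
#   2x + lambda * (-sin(x)) = 0,   cos(x) = 0.
import math

for k in range(-3, 4):
    x = math.pi / 2 + k * math.pi
    lam = 2 * x / math.sin(x)              # solve the stationarity equation for lambda
    residual = 2 * x - lam * math.sin(x)   # should be 0 at every such x
    assert abs(residual) < 1e-9 and abs(math.cos(x)) < 1e-9
    print(f"k={k:+d}: x={x:+.4f}, f(x)=x^2={x*x:.4f}")
# Every k passes, but f = x^2 differs across them: only x = +/- pi/2
# attain the global minimum, so the condition is necessary, not sufficient.
```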

Problems with more than one constraint (i.e., when $g(x) \in \mathbb R^m$ with $m>1$) are harder, because Lagrange's method no longer offers a necessary condition. However, if the gradients of the equality constraints are linearly independent at the optimum, or if the constraints are affine, Lagrange's condition is again necessary (source).
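To illustrate why linear independence of the constraint gradients matters, here is a hedged sketch with an example of my own (not from the answer): when the constraint gradient vanishes at the optimum, the Lagrange condition can fail even at the true minimizer. Take $\min\{x : x^2 = 0\}$.

```python
# In min { x : x^2 = 0 }, the only feasible point, hence the optimum, is
# x = 0, but the constraint gradient g'(0) = 0 is degenerate there, so
#   f'(0) + lambda * g'(0) = 1 + lambda * 0 = 1 != 0   for every lambda.
def lagrange_residual(x, lam):
    df = 1.0            # derivative of f(x) = x
    dg = 2.0 * x        # derivative of g(x) = x^2
    return df + lam * dg

# No multiplier can zero the residual at the optimum x = 0:
residuals = [abs(lagrange_residual(0.0, lam)) for lam in (-10.0, 0.0, 10.0)]
print(min(residuals))   # 1.0 -- the stationarity equation has no solution
```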

SECOND ANSWER

The Lagrange multiplier method finds all stationary points of the constrained problem in the interior of the feasible region; that is, all points at which varying the variables while satisfying the constraints leaves the objective function invariant to first order. These can be local minima, local maxima or saddle points, just like in an unconstrained problem. If a global optimum is in the interior of the feasible region, the method finds it, along with all other stationary points in the interior. If the global optimum is on the boundary of the feasible region, it won't be found; thus, to find the global optimum, you need to compare the value of the objective function at the stationary points with the values on the boundary.