Lagrange multipliers with non-smooth constraints


I read in a textbook a passing comment that Lagrange multipliers are not applicable if there are points of non-differentiability in the constraints (even if the constraints are continuous). For example, in the following problem:

$\min_{\boldsymbol x} \boldsymbol{a} \cdot \boldsymbol{x}$

s.t. $\max(x_1, x_2) = x_3$

for vectors $\boldsymbol a \in \mathbb{R}^3$ and $\boldsymbol x \in \mathbb{R}^3$.

Why can't I use Lagrange multipliers here? If I push through the standard steps of constructing the Lagrangian, differentiating w.r.t. the variables and the Lagrange multiplier, setting the partial derivatives to 0, and solving (assuming I'm able to), what goes wrong? Is there some related but alternative method that I can use?
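To be concrete, the Lagrangian I'd construct is

$$\mathcal{L}(\boldsymbol x, \lambda) = \boldsymbol{a} \cdot \boldsymbol{x} + \lambda \left( \max(x_1, x_2) - x_3 \right),$$

where the term $\max(x_1, x_2)$ is not differentiable along the surface $x_1 = x_2$, so the partial derivatives $\partial \mathcal{L} / \partial x_1$ and $\partial \mathcal{L} / \partial x_2$ fail to exist there.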

Thanks in advance.

Edit: I'm ok finding any critical point. I'm looking for issues beyond the standard one that just because you find a point with gradient = 0 doesn't mean you've found the optimum.


BEST ANSWER

What goes wrong is that the minimum may occur at a point where the derivative of the constraint does not exist. Try, e.g., minimizing $y$ subject to $g(x,y) = y - |x| = 0$. The minimum is at $(0,0)$, where $\frac{\partial g}{\partial x}$ does not exist, so the stationarity condition $\nabla f = \lambda \nabla g$ cannot even be written down there.
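A quick numerical check makes this concrete (a minimal sketch, not from the answer itself): the one-sided difference quotients of $g(x,y) = y - |x|$ in $x$ disagree at the minimizer $(0,0)$, so no two-sided derivative exists there.

```python
# The constraint from the example: g(x, y) = y - |x|.
def g(x, y):
    return y - abs(x)

h = 1e-6  # small step for one-sided difference quotients at (0, 0)

# Forward quotient approximates the right-hand derivative in x: -1.
right = (g(h, 0.0) - g(0.0, 0.0)) / h

# Backward quotient approximates the left-hand derivative in x: +1.
left = (g(0.0, 0.0) - g(-h, 0.0)) / h

# The two one-sided limits disagree, so dg/dx does not exist at (0, 0),
# and the Lagrange stationarity condition cannot be formed there.
print(right, left)
```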

ANSWER

In the last quarter-century, nonsmooth optimization has become an area of very active research. The pioneering book is Francis Clarke's Optimization and Nonsmooth Analysis; Clarke gave the entire subject its name. Another major figure is Rockafellar.

As your comment notes, a key tool is the subgradient. But that comment indicates some acquaintance with the subject, which likely makes this answer superfluous.
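To illustrate the subgradient idea on the simplest nonsmooth function (a toy sketch, not tied to the original problem): for $f(x) = |x|$, any $s \in [-1, 1]$ is a subgradient at $x = 0$, so the subgradient method with a diminishing step size can still be run even though the gradient does not exist at the minimizer.

```python
# Subgradient method on f(x) = |x|, which is non-differentiable at its
# minimizer x = 0.
def subgrad_abs(x):
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0  # any value in [-1, 1] is a valid subgradient at 0

x = 5.0
for k in range(1, 1001):
    x -= (1.0 / k) * subgrad_abs(x)  # diminishing step size 1/k

# The iterates are not monotone, but they approach the minimizer 0.
print(abs(x))
```

With a diminishing step size ($\sum_k \alpha_k = \infty$, $\alpha_k \to 0$), the iterates converge to the minimum even though no iterate-wise descent is guaranteed.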