Lagrangian optimization leads to a contradiction


Suppose I want to

$$\begin{array}{ll} \text{maximize} & x\\ \text{subject to} & x + y^2 \le 0\end{array}$$

Graphically, I obtain $(x^*,y^*)=(0,0)$ as the optimal solution. I obtain the same result using the Lagrangian approach. However, if I change the constraint to $x^3+y^2 \le 0$

$$\begin{array}{ll} \text{maximize} & x\\ \text{subject to} & x^3+y^2 \le 0\end{array}$$

then I get $(x^*,y^*)=(0,0)$ graphically but the Lagrangian approach leads to a contradiction ($x=0$ and $x\ne0$).
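To make the contradiction explicit, here is the stationarity system in each case for $L(x,y,\lambda) = x - \lambda\, g(x,y)$, assuming the constraint is active at the optimum:

$$g = x+y^2:\qquad 1-\lambda = 0,\qquad -2\lambda y = 0,\qquad x+y^2 = 0,$$

which gives $\lambda = 1$, $y = 0$, $x = 0$, consistent with the graphical solution. For the second problem,

$$g = x^3+y^2:\qquad 1-3\lambda x^2 = 0,\qquad -2\lambda y = 0,\qquad x^3+y^2 = 0.$$

Here the last two equations force either $\lambda = 0$ or $y = 0$ (and then $x = 0$); in either case the first equation reads $1 = 0$.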

I am trying to understand why this happened when I changed the constraint. My intuition tells me it has something to do with the differentiability of the constraint. The function $x+y^2=0$ is differentiable at $0$ if we view $x$ as the dependent variable but not differentiable if we view $y$ as the dependent variable. On the other hand, the function $x^3+y^2=0$ is never differentiable at $0$, whether $x$ or $y$ is viewed as the dependent variable.

Is my intuition correct? Any tips on how to proceed?

Best answer

No, the constraints are differentiable in both cases. You don't need to view them as implicit functions.

Basically, for the Lagrange multiplier method to work, the constraints need to satisfy some regularity conditions, also called constraint qualifications (Wikipedia has a good list: https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions#Regularity_conditions_.28or_constraint_qualifications.29).

The issue is that the second problem does not satisfy any of these regularity conditions. In the case of a single inequality constraint, they boil down to requiring that the gradient of the constraint function be nonzero at the solution.
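Concretely, writing $g_1(x,y)=x+y^2$ and $g_2(x,y)=x^3+y^2$:

$$\nabla g_1(x,y) = (1,\,2y)\ \Rightarrow\ \nabla g_1(0,0) = (1,0) \neq (0,0), \qquad \nabla g_2(x,y) = (3x^2,\,2y)\ \Rightarrow\ \nabla g_2(0,0) = (0,0),$$

so the gradient condition holds for the first problem but fails for the second at the very point where the maximum is attained.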

In its more general form (which leads to the Fritz John conditions), the Lagrangian for the second problem is $L(x,y,\lambda_0,\lambda_1)=\lambda_0 x+\lambda_1(x^3+y^2)$, with the additional requirement that $\lambda_0^2+\lambda_1^2\neq 0$. You can check that $(x,y,\lambda_0,\lambda_1)=(0,0,0,1)$ satisfies the resulting conditions.
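As a sanity check, here is a short symbolic verification (a sketch using SymPy; the symbol names are mine): the standard stationarity system for the second problem has no solution at all, while the point $(0,0,0,1)$ does satisfy the Fritz John conditions.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Standard Lagrangian for: maximize x subject to g = x^3 + y^2 <= 0,
# assuming the constraint is active at the optimum.
g = x**3 + y**2
L = x - lam * g
kkt = [sp.diff(L, x), sp.diff(L, y), g]
print(sp.solve(kkt, [x, y, lam], dict=True))  # [] -- the system is inconsistent

# Fritz John form L = lam0*x + lam1*(x^3 + y^2): check the point (0, 0, 0, 1).
lam0, lam1 = sp.symbols('lam0 lam1', real=True)
FJ = lam0 * x + lam1 * g
point = {x: 0, y: 0, lam0: 0, lam1: 1}
print([sp.diff(FJ, v).subs(point) for v in (x, y)] + [g.subs(point)])
# [0, 0, 0] -- stationarity and feasibility hold with (lam0, lam1) != (0, 0)
```

The empty solution set is exactly the "contradiction" from the question: no multiplier value can make the stationarity condition in $x$ hold at the optimum.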