I know the Lagrangian comes indirectly from the implicit function theorem (so don't worry about that nightmare), but does anyone know a good proof of the following theorem?
Consider the optimization problem of maximizing $f(x,y)$ subject to $g(x,y) \le m$, where $f, g : \mathbb{R}^2 \to \mathbb{R}$. Let $(a,b)$ be a solution and assume:

1. The gradient $\nabla g(a,b)$ is not the zero vector.
2. The implicit function theorem applies.

Then there exists $\lambda$ such that, defining
$$ L(x, y, \lambda) = f(x,y) - \lambda\,(g(x,y) - m), $$
the following hold at $(a,b)$:
$$ \frac{\partial L}{\partial x} = 0, \qquad \frac{\partial L}{\partial y} = 0, \qquad \lambda \ge 0, \qquad \lambda\,(g(a,b) - m) = 0. $$
Thanks!
The inequality-constrained problem and the equality-constrained problem (with just $g(x,y) = m$) are closely related. The difference is that with the equality constraint the candidate points found by the Lagrange conditions lie only on the boundary curve $g(x,y) = m$, whereas with the inequality the feasible set is a whole region. The complementary-slackness condition $\lambda\,(g(x,y) - m) = 0$ handles both cases: either the constraint binds ($g = m$, and we are back on the boundary), or it is slack and $\lambda = 0$, so the conditions reduce to those of an unconstrained problem.
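To make the two cases concrete, here is a small worked example of my own (not from the original question) checking the stated conditions.

**Binding case.** Maximize $f(x,y) = xy$ subject to $g(x,y) = x^2 + y^2 \le 2$ (so $m = 2$):
$$ L(x,y,\lambda) = xy - \lambda\,(x^2 + y^2 - 2), \qquad \frac{\partial L}{\partial x} = y - 2\lambda x = 0, \qquad \frac{\partial L}{\partial y} = x - 2\lambda y = 0. $$
These give $x = y$ and $\lambda = \tfrac12 > 0$; since $\lambda > 0$, complementary slackness forces $x^2 + y^2 = 2$, so $(a,b) = (1,1)$ is a solution with $f(1,1) = 1$.

**Slack case.** Maximize $f(x,y) = -(x^2 + y^2)$ subject to $x + y \le 1$. The unconstrained maximum $(0,0)$ already satisfies $x + y < 1$, so the constraint is slack; then $\lambda\,(g - m) = 0$ forces $\lambda = 0$, and the conditions reduce to $\nabla f(0,0) = 0$, which holds.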