Consider an optimization problem:
$\max\limits_{\substack{x_1, x_2}} x_1 + x_2$
s.t. $2 \sqrt{x_1} + x_2 \leq y$
$x_1, x_2 \geq 0$
In order to solve it, I set up the Lagrangian:
$\mathcal{L}(x_1, x_2, \lambda, \mu_1, \mu_2) = x_1 + x_2 - \lambda (2 \sqrt{x_1} + x_2 - y) + \mu_1 x_1 + \mu_2 x_2$
The F.O.C are:
$\dfrac{\partial \mathcal{L}}{\partial x_1} = 1 - \dfrac{\lambda}{\sqrt{x_1}} + \mu_1 = 0$ (1)
and other stationary conditions for $x_2, \lambda, \mu_1, \mu_2$.
It can be seen that F.O.C. condition (1) is only defined when $x_1 > 0$; it is undefined at $x_1 = 0$. Therefore, we need to consider separately the cases where $(x_1, x_2)$ lies on the boundary of the feasible set, i.e. $(0,y)$ and $(y^2/4, 0)$ in the problem above. So my questions are:
- Do the Kuhn-Tucker conditions fail to apply at boundary points where the constraint functions' derivatives are undefined?
- If we cannot apply the Kuhn-Tucker (KT) conditions at the boundary, it means we are going to miss some possible extrema at the boundary if we only use the KT conditions, right? How do we know, for a specific optimization problem, which points should be checked separately without using the KT conditions?
Yes, the KKT conditions assume that the functions involved (objective function, constraints) are differentiable. For non-smooth problems everything becomes more complicated. For convex functions you can define the concept of a subdifferential and formulate the optimality conditions that way. One can also extend the concept of a subdifferential to locally Lipschitz functions, but this assumption is not satisfied in your case: $\sqrt{\cdot}$ is not locally Lipschitz at $0$.
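To see the Lipschitz failure concretely: the difference quotient of $\sqrt{\cdot}$ at $0$ is $\sqrt{h}/h = 1/\sqrt{h}$, which is unbounded as $h \to 0^+$. A quick numerical check (a sketch, not part of the original argument):

```python
import math

# sqrt is not Lipschitz near 0: the difference quotient
# (sqrt(h) - sqrt(0)) / h = 1 / sqrt(h) blows up as h -> 0+,
# so no Lipschitz constant works on any neighborhood of 0.
for h in (1e-2, 1e-4, 1e-8):
    print(h, math.sqrt(h) / h)  # grows like 1 / sqrt(h)
```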
Instead of going this complicated route I would rather reformulate the problem by restating the constraints using differentiable functions, for example $x_2 \leq y$ and $4x_1 \leq (y-x_2)^2$ (in addition to your existing nonnegativity constraints for $x_1$, $x_2$).
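As a sanity check, the reformulated constraints carve out the same feasible set as the original one (given $x_2 \leq y$, squaring $2\sqrt{x_1} \leq y - x_2$ is an equivalence). A quick random-sampling comparison, taking $y = 2$ as a sample value:

```python
import math
import random

def feasible_original(x1, x2, y):
    # original constraint: 2*sqrt(x1) + x2 <= y, with x1, x2 >= 0
    return x1 >= 0 and x2 >= 0 and 2 * math.sqrt(x1) + x2 <= y

def feasible_reformulated(x1, x2, y):
    # reformulated constraints: x2 <= y and 4*x1 <= (y - x2)**2
    return x1 >= 0 and x2 >= 0 and x2 <= y and 4 * x1 <= (y - x2) ** 2

random.seed(0)
y = 2.0  # sample parameter value (assumption)
for _ in range(100_000):
    x1, x2 = random.uniform(0, 2), random.uniform(0, 3)
    assert feasible_original(x1, x2, y) == feasible_reformulated(x1, x2, y)
print("feasible sets agree on all sampled points")
```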
Finally, since your problem only involves two optimization variables, it may be solved graphically by sketching the feasible region.
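Alternatively, the problem reduces to one dimension by hand: the objective is increasing in both variables, so the constraint binds at the optimum; substituting $x_2 = y - 2\sqrt{x_1}$ and $t = \sqrt{x_1}$ gives $f(t) = t^2 - 2t + y$ on $[0, y/2]$, a convex parabola whose maximum sits at an endpoint. A minimal sketch of this candidate comparison (function names are illustrative):

```python
def boundary_value(t, y):
    # objective x1 + x2 along the binding constraint, with t = sqrt(x1):
    # t**2 + (y - 2*t)
    return t * t - 2 * t + y

def solve(y):
    # f is convex on [0, y/2], so the maximum is at an endpoint:
    # t = 0 gives the point (0, y); t = y/2 gives (y**2 / 4, 0)
    candidates = {(0.0, y): boundary_value(0.0, y),
                  (y * y / 4, 0.0): boundary_value(y / 2, y)}
    return max(candidates, key=candidates.get)

print(solve(2.0))  # (0.0, 2.0): value 2 beats 1
print(solve(6.0))  # (9.0, 0.0): value 9 beats 6
```

Note that which corner wins depends on $y$: for $y < 4$ the point $(0, y)$ is optimal, for $y > 4$ it is $(y^2/4, 0)$, and at $y = 4$ both give the same value.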