I want to know when the penalty and barrier methods for constrained optimization cannot be applied, and why. I would hope for an intuitive explanation. I'm also wondering: even when these two methods can be applied, when do they fail? I mean, are there cases that make these two methods behave badly for a specific problem?
I hope you can help me. Bye!
Candidly, they both fail. A lot.

For unconstrained and equality-constrained optimization, we get nonlinear systems that we need to satisfy. For unconstrained problems, it's $$ \nabla f(x)=0 $$ and for equality-constrained problems, it's $$\begin{array}{rl} \nabla f(x) + g^\prime(x)^*y =&0\\ g(x)=&0 \end{array}$$ Since we have nonlinear systems, we can attack them with something like Newton's method for nonlinear systems and then globalize the algorithm so that it, in theory, converges from an arbitrary starting point.

For inequality-constrained problems, we get more than just a nonlinear system to solve. For example, for problems of the form $\min\limits_x\{f(x) : h(x)\geq 0\}$, we have the optimality conditions $$\begin{array}{rl} \nabla f(x) - h^\prime(x)^*z =&0\\ h(x)\geq &0\\ z\geq &0\\ h(x)\circ z =&0 \end{array}$$ where $\circ$ denotes the pointwise product. The inequalities mean that we can't just apply a Newton method and then fix it with globalization. As such, we need some kind of gimmick to make the problem look like an equality-constrained problem, and, as it turns out, there are a lot of them. If we pretend the inequalities are really equalities, we end up with active-set methods. If we just ignore the inequalities and then project the answer, we get projection methods. If we push the inequalities into the objective, we get barrier or penalty methods. If we relax the complementary slackness condition into $h(x)\circ z=\mu e$, where $e$ is the vector of all ones, we get interior-point methods.

However, in each of these cases, we don't really get to attack the problem directly the way we do with equality-constrained and unconstrained problems, where Newton's method attacks the problem head-on (though, again, it has to be fixed with some kind of globalization to make it work). With inequalities, we have to futz with things, and that causes problems. Did we futz too much? Did we relax our futzing too fast or too slow?
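To make the penalty trouble concrete, here's a minimal sketch on a toy problem I made up for illustration (the problem, function names, and the fixed $\mu$ schedule are all my own choices, not any particular library's API):

```python
# Quadratic-penalty sketch for: min x^2  s.t.  x >= 1  (exact solution x* = 1).
# The constraint h(x) = x - 1 >= 0 is pushed into the objective as
#   phi(x) = x^2 + (mu/2) * min(0, x - 1)^2,
# and mu is driven upward by hand.

def grad(x, mu):
    """phi'(x) for the penalized objective."""
    return 2.0 * x + mu * min(0.0, x - 1.0)

def curv(x, mu):
    """phi''(x); note it grows like mu on the infeasible side."""
    return 2.0 + (mu if x < 1.0 else 0.0)

def newton(mu, x=0.0, iters=50):
    """Minimize phi for a fixed mu with a plain Newton iteration."""
    for _ in range(iters):
        x -= grad(x, mu) / curv(x, mu)
    return x

for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Analytically the minimizer is mu / (2 + mu): it violates the
    # constraint for every finite mu and only becomes feasible in the
    # limit mu -> infinity, while the curvature 2 + mu keeps growing.
    print(mu, newton(mu))
```

The point of the sketch: for any finite $\mu$ the penalized minimizer sits slightly outside the feasible set, and making it accurate requires enormous $\mu$, which makes the subproblems increasingly ill-conditioned. That tension is exactly the futzing described above.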
Each of the algorithms I listed above has parameters that have to be set by hand and tuned during the course of the algorithm. That's never done perfectly, and there are always problems where it'll fail. Anyway, that's a somewhat amorphous answer, as it really depends on the problem and the algorithm.
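The barrier flavor shows the same hand-tuning problem from the other side. This is a sketch on a toy problem of my own (the names are mine; only the log barrier itself is standard), where the barrier subproblem happens to have a closed-form minimizer:

```python
import math

# Log-barrier sketch for: min x^2  s.t.  x >= 1  (exact solution x* = 1).
# phi(x) = x^2 - mu * log(x - 1) requires a strictly feasible point, x > 1.
# Stationarity 2x - mu/(x - 1) = 0 solves in closed form for this toy problem:

def barrier_minimizer(mu):
    """Minimizer of phi for a fixed mu (positive root of 2x^2 - 2x - mu = 0)."""
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

for mu in [1.0, 0.1, 0.01, 1e-4]:
    x = barrier_minimizer(mu)
    # phi''(x) = 2 + mu/(x - 1)^2 behaves like 4/mu near the solution,
    # so the subproblems get stiffer as mu shrinks.
    print(mu, x, 2.0 + mu / (x - 1.0) ** 2)
```

The iterates stay strictly feasible and approach $x^*=1$ from the interior, but the curvature of the barrier function blows up like $1/\mu$: shrink $\mu$ too quickly and Newton's method on the subproblem struggles; shrink it too slowly and the outer loop crawls. That schedule is precisely one of the parameters that has to be tuned by hand.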