Understanding Lagrangian for LP


From Boyd & Vandenberghe's *Convex Optimization*:

> [Image of an excerpt deriving the dual function of the standard-form LP: $g(\lambda,\nu) = -b^T\nu + \inf_x \,(c + A^T\nu - \lambda)^T x$, which equals $-b^T\nu$ when $c + A^T\nu - \lambda = 0$ and $-\infty$ otherwise.]

While it is true that a linear function is bounded below only when it is identically zero, in the highlighted part didn't we ignore the constraint that $x \succeq 0$? Why?

If $x$ is restricted to be nonnegative, then the infimum of $L$ over $x$ can be finite even when $c + A^T\nu - \lambda \neq 0$; specifically, when $c + A^T\nu - \lambda \succeq 0$, the infimum over $x \succeq 0$ is attained at $x = 0$.


The point of introducing the Lagrange function $L$ is to drop the constraints on $x$. The primal problem is actually equivalent to the unconstrained minimization of the function $f(x) = \sup_{\lambda\ge 0,\nu} L(x,\lambda,\nu)$, because the supremum inside $f$ becomes infinite if the original constraints on $x$ are violated (including the non-negativity constraint $x \ge 0$).
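This can be checked numerically. Below is a minimal sketch for the standard-form LP Lagrangian; the data $c$, $A$, $b$ and the infeasible point `x_bad` are made-up values for illustration:

```python
import numpy as np

# Lagrangian of the standard-form LP: minimize c^T x s.t. Ax = b, x >= 0
def L(x, lam, nu, c, A, b):
    return float(c @ x + nu @ (A @ x - b) - lam @ x)

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x_bad = np.array([2.0, -1.0])  # satisfies Ax = b but violates x >= 0
nu = np.zeros(1)

# Crank up the multiplier on the violated coordinate: L grows without bound,
# so sup_{lam >= 0, nu} L(x_bad, lam, nu) = +inf, i.e. f(x_bad) = +inf.
vals = [L(x_bad, np.array([0.0, t]), nu, c, A, b) for t in (1, 10, 100)]
print(vals)  # -> [1.0, 10.0, 100.0]
```

So any $x$ that violates a constraint is ruled out by the inner supremum, which is exactly why $L$ itself carries no explicit constraints on $x$.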

That is, you have the unconstrained optimization problem $\inf_x f(x)$, which is related to the dual problem through the max-min inequality: $$ \sup_{\lambda\ge 0,\nu} g(\lambda,\nu) = \sup_{\lambda\ge 0,\nu}\inf_x L(x,\lambda,\nu) \le \inf_x\sup_{\lambda\ge 0,\nu} L(x,\lambda,\nu) = \inf_x f(x) . $$ This is the weak-duality theorem, and the dual form (its left-hand side) is what is being simplified in your text.
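As a concrete sanity check of this inequality, one can solve a tiny standard-form LP and its dual numerically. A sketch using `scipy.optimize.linprog`, with made-up data $c$, $A$, $b$ (for feasible LPs strong duality actually holds, so the two values coincide here):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Primal: minimize c^T x  s.t.  Ax = b, x >= 0
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 2)

# Dual: maximize -b^T nu  s.t.  lambda = c + A^T nu >= 0.
# Substituting y = -nu gives the familiar form: maximize b^T y s.t. A^T y <= c,
# which linprog handles as minimize -b^T y with y free.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)])

p_star = primal.fun   # primal optimal value
d_star = -dual.fun    # dual optimal value
print(p_star, d_star)  # weak duality: d_star <= p_star (here they coincide)
```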

Notice then that the infimum over $x$ in the dual function $g$ is also unconstrained (the original non-negativity constraint on $x$ has been dropped), and so if $c + A^T\nu - \lambda \neq 0$, the infimum is $-\infty$.
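This last claim is easy to see numerically: for any nonzero $d = c + A^T\nu - \lambda$, moving $x$ along the direction $-d$ drives $d^Tx = -t\,\|d\|_2^2 \to -\infty$. A sketch, where the vector `d` is a made-up example:

```python
import numpy as np

d = np.array([1.0, -2.0, 0.5])  # any nonzero d = c + A^T nu - lambda

# Evaluate d^T x at x = -t d for growing t: the values are -t * ||d||^2,
# strictly decreasing without bound, so inf_x d^T x = -inf.
values = [float(d @ (-t * d)) for t in (1, 10, 100)]
print(values)  # -> [-5.25, -52.5, -525.0]
```

Had the constraint $x \succeq 0$ been kept inside the infimum (as the question suggests), the direction $-d$ would not always be available and the infimum could indeed be finite; dropping that constraint is precisely what forces $g(\lambda,\nu) = -\infty$ whenever $c + A^T\nu - \lambda \neq 0$.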