KKT for non-convex problems


In my optimization course we learned something about KKT for non-convex problems:

$$\min \; f(x)$$ $$\text{s.t.} \; c(x)=0$$ $$d(x)\geq 0$$ with $$f: \mathbb{R}^n\rightarrow \mathbb{R}, \quad c: \mathbb{R}^n\rightarrow \mathbb{R}^m, \quad d: \mathbb{R}^n\rightarrow \mathbb{R}^p.$$

KKT: If $x_*$ is a local optimum and the constraints fulfill LICQ at $x_*$, then there are unique multipliers $\lambda_*\in\mathbb{R}^m,\mu_*\in \mathbb{R}^p$ such that: $$\nabla f(x_*)+\nabla c(x_*)\lambda_*-\nabla d(x_*)\mu_* = 0$$ $$c(x_*)=0$$ $$d(x_*)^T\mu_* = 0$$ $$d(x_*),\mu_*\geq 0$$ and the Hessian $\nabla_{xx}^2 L(x_*,\lambda_*,\mu_*)$ is positive semidefinite on the critical cone (the second-order necessary condition).

The first KKT condition looks similar to the one for convex problems. The part I don't understand is the operation between $\mu_*$ and $\nabla d(x_*)$. Is it a component-wise multiplication? Are $d(x_*)$ and $c(x_*)$ representations of multiple constraints, such as $Ax=b$? How do I obtain $0$ although I add vectors? Isn't the only possibility to fulfill $d(x_*)^T\mu_* = 0$ to set $\mu_*=0$?

A given example problem was: $$\min \; -4x+2y$$ $$\text{s.t.}\; -(1-x)^4+y^2\leq 0$$ $$-x^2-4y\leq -4$$

So is $d: \mathbb{R}^2\rightarrow\mathbb{R}^2$?
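Yes, $d$ maps $\mathbb{R}^2\rightarrow\mathbb{R}^2$ here, and the KKT conditions can be checked numerically. A minimal Python sketch, assuming the candidate point $(x,y)=(0,1)$ (chosen for illustration; one can verify by hand that both constraints hold there with equality) and rewriting both inequalities in the form $d_i(x,y)\geq 0$:

```python
import numpy as np

# Candidate point (an assumption for illustration): (x, y) = (0, 1)
# makes both constraints active.
x, y = 0.0, 1.0

# Gradients at (x, y), writing the constraints as d_i(x, y) >= 0:
#   d1 = (1 - x)^4 - y^2 >= 0,   d2 = x^2 + 4y - 4 >= 0
grad_f  = np.array([-4.0, 2.0])                  # f = -4x + 2y
grad_d1 = np.array([-4 * (1 - x) ** 3, -2 * y])  # = (-4, -2) at (0, 1)
grad_d2 = np.array([2 * x, 4.0])                 # = ( 0,  4) at (0, 1)

# Stationarity: grad_f = mu1 * grad_d1 + mu2 * grad_d2.
# Both constraints are active and their gradients are linearly
# independent (LICQ), so the multipliers are the unique solution
# of a 2x2 linear system.
J = np.column_stack([grad_d1, grad_d2])
mu = np.linalg.solve(J, grad_f)
print(mu)                 # -> [1. 1.]
print(bool(np.all(mu >= 0)))  # dual feasibility holds
```

Both multipliers come out strictly positive, consistent with both constraints being active at this point.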

Answer:

For a vector-valued constraint, the Lagrange multiplier is represented by a vector.
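Spelled out componentwise (an expansion consistent with the stationarity condition in the question), the products in the stationarity condition are matrix–vector products, i.e. weighted sums of the individual constraint gradients; this is how a sum of vectors can equal the zero vector:

```latex
\nabla d(x)\,\mu = \sum_{i=1}^{p} \mu_i \,\nabla d_i(x),
\qquad
\nabla c(x)\,\lambda = \sum_{i=1}^{m} \lambda_i \,\nabla c_i(x),
```

where $\nabla d(x)\in\mathbb{R}^{n\times p}$ is the matrix whose $i$-th column is the gradient $\nabla d_i(x)$.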

Example:

The constraint $\mathbf{d}(\mathbf{x}) \leq \mathbf{0}$ (the sign convention of Boyd–Vandenberghe) enters through the complementarity condition $\boldsymbol{\mu}^T\mathbf{d}(\mathbf{x}) = 0$ with $\boldsymbol{\mu} \geq \mathbf{0}$. Since every term $\mu_i d_i(\mathbf{x})$ in this sum has the same sign, the sum vanishes exactly when each individual product $\mu_i d_i(\mathbf{x})$ vanishes. So $\boldsymbol{\mu}$ need not be the zero vector: only the components belonging to inactive constraints must be zero.
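A numeric sketch (hypothetical values) of why $\boldsymbol{\mu}^T\mathbf{d}(\mathbf{x}) = 0$ does not force $\boldsymbol{\mu} = \mathbf{0}$:

```python
import numpy as np

# Hypothetical values at a KKT point, with the convention d(x) <= 0 and
# mu >= 0: every product mu_i * d_i(x) is <= 0, so the sum mu^T d(x)
# can only be zero if each individual product is zero.
d  = np.array([0.0, -3.0, 0.0])   # constraints 1 and 3 active, 2 inactive
mu = np.array([2.0,  0.0, 1.5])   # positive only where the constraint is active

print(mu @ d)   # -> 0.0, even though mu is not the zero vector
print(mu * d)   # componentwise products: all zero
```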

For matrix constraints, the multipliers form a matrix, and the pairing uses the trace operator.

Example: $\mathbf{A}(\mathbf{x}) \preceq \mathbf{0}$ is represented as $\operatorname{trace} \{\mathbf{A}(\mathbf{x})\mathbf{M}\} = 0$, where $\mathbf{M} \succeq \mathbf{0}$ is the matrix of Lagrange multipliers.
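A small numeric illustration (hypothetical matrices) of how the trace pairing generalizes $\boldsymbol{\mu}^T\mathbf{d}(\mathbf{x})$:

```python
import numpy as np

# Hypothetical negative-semidefinite constraint value A and
# positive-semidefinite multiplier M: trace(A M) plays the role of the
# inner product mu^T d(x) for matrix constraints.
A = np.array([[0.0,  0.0],
              [0.0, -2.0]])   # constraint value at the optimum, A <= 0
M = np.array([[3.0,  0.0],
              [0.0,  0.0]])   # multiplier, supported on the nullspace of A

print(np.trace(A @ M))  # -> 0.0: complementarity holds although M != 0
```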

Source: Convex Optimization (S. Boyd and L. Vandenberghe).