Eliminate cases before checking all KKT conditions


I have the following nonlinear program to solve:

$$\left\{\begin{matrix} \min & (x-3)^2 + (y-2)^2 \\ s.t. & x^2 +y^2 \leq 5 \\ & x+y\leq 3 \\ & x \geq 0\\ & y\geq 0 \end{matrix}\right.$$

To check the first-order KKT conditions, I would have to go through all $2^4 = 16$ cases, depending on which of the four constraints are active. I know the problem has a convex objective function and a compact domain (so it does have a minimum). But is there a faster way to find the minimum without checking all the cases? I would like to argue, for example, that I only need to check the vertices, but I don't know if that is correct. If it is, why? In general, is there a way to know which cases I truly have to check?

Thanks!


In a convex set $S$, a face $F$ is a convex subset of $S$ such that for all $z_1,z_2 \in S$, if $\exists 0 < \lambda < 1$ such that $\lambda z_1 + (1-\lambda)z_2 \in F$ then $z_1,z_2 \in F$.

You can see that $S$ and the empty set are faces of $S$. For a point $z \in S$, the minimal face containing $z$ is the intersection of all the faces containing $z$ (it is still a face).

Suppose you minimize a concave objective (or maximize a convex objective). If $z$ is optimal, then the minimal face containing $z$ contains only optimal solutions. Indeed, if $\lambda z_1 + (1-\lambda) z_2 = z$ with $0 < \lambda < 1$, then since the objective is concave and $z$ is the minimum, we have $$f(z) = f(\lambda z_1 + (1-\lambda) z_2) \geq \lambda f(z_1) + (1-\lambda) f(z_2) \geq \lambda f(z) + (1-\lambda) f(z) = f(z),$$ so the inequalities are equalities and $f(z_1) = f(z) = f(z_2)$.

The minimal face for a point in the interior is the whole set $S$, so if a point in the interior is optimal, every feasible solution is optimal. For a polyhedron (i.e. only linear inequalities), the only points $z$ for which the minimal face containing $z$ is $\{z\}$ are the vertices, and if $z$ is not a vertex, the minimal face containing $z$ contains at least one vertex. So you know that at least one of the vertices is optimal.
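To make this concrete, here is a small sketch (my own toy example, not part of the original answer): minimizing the concave function $f(x,y) = -(x^2+y^2)$ over the triangle $\{x+y \leq 3,\ x \geq 0,\ y \geq 0\}$ attains its minimum at a vertex, and random feasible samples never beat the best vertex.

```python
import random

# Concave objective: f(x, y) = -(x^2 + y^2)
def f(x, y):
    return -(x * x + y * y)

# Vertices of the triangle {x + y <= 3, x >= 0, y >= 0}
vertices = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
best_vertex_val = min(f(x, y) for x, y in vertices)  # -9.0, attained at (3,0) and (0,3)

# Sample feasible points: none does better than the best vertex.
random.seed(0)
samples = []
while len(samples) < 1000:
    x, y = random.uniform(0, 3), random.uniform(0, 3)
    if x + y <= 3:
        samples.append((x, y))
assert all(f(x, y) >= best_vertex_val for x, y in samples)
print(best_vertex_val)  # -9.0
```

Sampling is only an illustration of the face argument, of course, not a proof; the proof is the concavity inequality above.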

Here you have a quadratic constraint. For the points $z$ on the boundary of this constraint, the minimal face containing $z$ is also $\{z\}$, so they must be considered too.

Since a linear objective is concave, we see that in Linear Programming at least one vertex is optimal: this is the classical result.

In your case, however, you minimize a convex objective, so you cannot use this approach directly.

You know, however, that if the unconstrained optimal solution is feasible, it is also the constrained optimal solution. If that is not the case, you know that the constrained optimal solution is not strictly feasible (i.e. not in the interior of the feasible set), because otherwise it would be the unconstrained optimal solution (since your objective function is convex, local optima are global optima). You cannot argue, however, that it is at a vertex or on the boundary of the quadratic constraint. Your problem gives a good counterexample.
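For this problem the feasibility test is immediate; here is a quick sketch (my own check, with the numbers from the question) showing that the unconstrained minimizer $(3,2)$ violates the first two constraints:

```python
# Feasibility check for the unconstrained minimizer (3, 2)
x, y = 3.0, 2.0
checks = {
    "x^2 + y^2 <= 5": x**2 + y**2 <= 5,  # 13 <= 5 is False
    "x + y <= 3": x + y <= 3,            # 5 <= 3 is False
    "x >= 0": x >= 0,
    "y >= 0": y >= 0,
}
violated = [name for name, ok in checks.items() if not ok]
print(violated)  # ['x^2 + y^2 <= 5', 'x + y <= 3']
```

So the constrained optimum must lie on the boundary of the feasible set.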

Since your objective is $\|z - (3,2)\|_2^2$, you know that $(3,2)$ is the unconstrained optimal solution, and if $(3,2)$ is not feasible, the constrained optimal solution is the projection of $(3,2)$ onto the feasible set in the $\|\cdot\|_2$ norm. You can see that if you change $(3,2)$ to other constants, the projection can be any point of the boundary of the feasible set.
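For completeness, the projection can be carried out by hand here; the computation below is my own, not stated in the answer. Projecting $(3,2)$ onto the line $x+y=3$ gives $(2,1)$, which happens to satisfy all the other constraints as well, and the KKT stationarity conditions then confirm optimality with multipliers $\mu_1 = 0$, $\mu_2 = 2$:

```python
# Project (3, 2) onto the line x + y = 3 along the normal direction (1, 1)
px, py = 3.0, 2.0
t = (px + py - 3.0) / 2.0      # step length along the unit-less normal (1, 1)
qx, qy = px - t, py - t        # projection: (2.0, 1.0)
print(qx, qy)

# The projected point is feasible for every constraint:
assert qx**2 + qy**2 <= 5 + 1e-12 and qx + qy <= 3 + 1e-12
assert qx >= 0 and qy >= 0

# KKT stationarity at (2, 1) with g1 = x^2+y^2-5, g2 = x+y-3:
# grad f = (2(x-3), 2(y-2)) = (-2, -2), grad g1 = (4, 2), grad g2 = (1, 1).
# Solving (-2,-2) + mu1*(4,2) + mu2*(1,1) = (0,0) gives mu1 = 0, mu2 = 2 >= 0.
mu1, mu2 = 0.0, 2.0
gx = -2 + mu1 * 4 + mu2 * 1
gy = -2 + mu1 * 2 + mu2 * 1
print(gx, gy)  # 0.0 0.0 -> stationarity holds, so (2, 1) is optimal
```

The optimal value is then $(2-3)^2 + (1-2)^2 = 2$.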