QCQP with bilinear constraints


I have the optimization problem in $u := \left(u_{1},u_{2}\right)$

$$\begin{array}{ll} \text{minimize} & \frac{1}{2}u^{\top}\Sigma u-au^{\top}\beta\\ \text{subject to} & u_{1}u_{2}\le0\end{array}$$

where the matrix $\Sigma$ is positive definite and $a>0$.

The first-order condition is

\begin{equation} \nabla f=\Sigma u-\beta a=0\Rightarrow u^{*}=a\Sigma^{-1}\beta=\left(\begin{array}{c} u_{1}^{*}\\ u_{2}^{*} \end{array}\right) \end{equation}
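As a sanity check, the stationary point can be computed numerically. A minimal sketch, where $\Sigma$, $\beta$, and $a$ are assumed example values (not from the question):

```python
import numpy as np

# Assumed example data: Sigma positive definite, a > 0
Sigma = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
beta = np.array([1.0, 1.0])
a = 1.0

# Unconstrained minimizer from the first-order condition: u* = a * Sigma^{-1} beta
u_star = a * np.linalg.solve(Sigma, beta)
print(u_star)                      # [1/3, 1/3]
print(u_star[0] * u_star[1] > 0)   # True: u* violates u1*u2 <= 0
```

For this data the unconstrained minimizer lies in the infeasible region, which is exactly the case the question asks about.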

If $u_{1}^{*}u_{2}^{*}>0$ and we assume $f\left(u\right)$ attains its constrained minimum at $u^{\#}=\left(\begin{array}{c} u_{1}^{\#}\\ u_{2}^{\#} \end{array}\right)$, do we have $u_{1}^{\#}u_{2}^{\#}=0$?

Can anyone show this rigorously?

Best answer:

Sure, in any problem with a single inequality constraint $$\min_x\ f(x) \quad \mathrm{s.t.} \quad g(x) \leq 0,$$ whether convex or not, the solutions will be of two types:

  1. unconstrained minimizers of $f(x)$ with the inequality constraint inactive, $g(x)<0$;

  2. constrained minimizers with the inequality constraint active, $g(x)=0$.

The geometry here is fairly intuitive: if $g(x)<0$ and yet you're not at an unconstrained minimum of $f(x)$, you can descend along $-\nabla f$ until you either hit the boundary $g=0$ or reach a minimizer of $f(x)$. This argument can be made rigorous under suitable regularity assumptions on $f$ and $g$.

If you can prove there are no solutions of type (1) then all must be of type (2).
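For this two-variable problem, type-(2) solutions are easy to enumerate: the active constraint $u_{1}u_{2}=0$ forces the minimizer onto one of the coordinate axes, where $f$ reduces to a 1-D quadratic. A minimal sketch, using the same assumed example data as above:

```python
import numpy as np

# Assumed example data: Sigma positive definite, a > 0
Sigma = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
beta = np.array([1.0, 1.0])
a = 1.0

def f(u):
    return 0.5 * u @ Sigma @ u - a * u @ beta

# Active constraint u1*u2 = 0: restrict f to each axis, solve the
# resulting 1-D quadratic in closed form, then keep the better candidate.
u_axis1 = np.array([a * beta[0] / Sigma[0, 0], 0.0])  # u2 = 0
u_axis2 = np.array([0.0, a * beta[1] / Sigma[1, 1]])  # u1 = 0
u_hash = min((u_axis1, u_axis2), key=f)
print(u_hash, f(u_hash))
```

Each axis restriction is a strictly convex 1-D quadratic, so its minimizer is given in closed form by its own first-order condition.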

Second answer:

I have solved the problem based on user7530's comment.

Here is a proof. Suppose, for contradiction, that $f$ attains its constrained minimum at $u^{0}\in\left\{ x\in\mathbb{R}^{2}:x_{1}x_{2}<0\right\}$. Since this set is open, there exists $r>0$ such that $B\left(u^{0},r\right)\subset\left\{ x\in\mathbb{R}^{2}:x_{1}x_{2}<0\right\}$.

Since $\Sigma$ is positive definite, $f$ is strictly convex with unique unconstrained minimizer $u^{*}$. By assumption $u_{1}^{*}u_{2}^{*}>0$, i.e. $u^{*}\notin\left\{ x\in\mathbb{R}^{2}:x_{1}x_{2}\le0\right\}$, so $u^{*}\neq u^{0}$ and hence $f\left(u^{*}\right)<f\left(u^{0}\right)$. Choose $\lambda\in\left(0,1\right)$ close enough to $1$ that $u^{1}:=\lambda u^{0}+\left(1-\lambda\right)u^{*}\in B\left(u^{0},r\right)$; then $u^{1}$ is feasible, and by strict convexity $$f\left(u^{1}\right)<\lambda f\left(u^{0}\right)+\left(1-\lambda\right)f\left(u^{*}\right)<f\left(u^{0}\right),$$

a contradiction. Hence $f$ attains its constrained minimum at some $u^{\#}$ with $u_{1}^{\#}u_{2}^{\#}=0$.
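A quick numerical check of this conclusion, again with the assumed example data (the grid range and resolution are arbitrary): brute-force $f$ over feasible grid points and confirm the best one sits on a coordinate axis.

```python
import numpy as np

# Assumed example data: Sigma positive definite, a > 0
Sigma = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
beta = np.array([1.0, 1.0])
a = 1.0

# Evaluate f on a grid, mask out infeasible points (u1*u2 > 0), and
# locate the best feasible grid point.
g = np.linspace(-2.0, 2.0, 401)
U1, U2 = np.meshgrid(g, g, indexing="ij")
F = 0.5 * (Sigma[0, 0] * U1**2 + 2 * Sigma[0, 1] * U1 * U2
           + Sigma[1, 1] * U2**2) - a * (beta[0] * U1 + beta[1] * U2)
F[U1 * U2 > 0] = np.inf
i, j = np.unravel_index(np.argmin(F), F.shape)
best = np.array([g[i], g[j]])
print(best, F[i, j])   # the best feasible point lies on a coordinate axis
```

The grid minimizer has one coordinate equal to zero, consistent with the proof that the constraint is active at the optimum.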