Box-constrained QCQP


Let ${\bf A} \in \mathbb{R}^{n \times n}$ be a symmetric positive semidefinite (PSD) matrix, let ${\bf a} \in \mathbb{R}^n$, and let ${\bf B} \in \mathbb{R}^{n \times n}$ be a symmetric indefinite matrix whose eigenvalues come in nonzero pairs $\pm \lambda_i$. Let $c > 0$. Consider the optimization problem

$$ \begin{array}{ll} \underset {{\bf x} \in \mathbb{R}^n} {\text{minimize}} & {\bf x}^\top {\bf A} \, {\bf x} + {\bf a}^\top {\bf x} \\ \text{subject to} & {\bf x}^\top {\bf B} \, {\bf x} = 0\\ & 0 \le x_i \le c \end{array} $$

I am aware that this is not a convex problem because of the quadratic equality constraint. I did find a paper that proposes methods for quadratic optimization under a single quadratic equality constraint, but it does not quite fit my problem.

How would one tackle it, and ideally, are there good free solvers available?

BEST ANSWER

This is really more of a comment than a complete answer, but in case you are not aware, a slightly simpler version of your quadratic program (dropping the linear term and the box constraints) can be reformulated as

\begin{align} \text{minimize}\;\;&\langle A,X\rangle\\[4pt] \text{subject to}\;\;&\langle B,X\rangle=0\\[1pt] &\text{rank}(X)=1\\[1pt] &X\succeq 0 \end{align}

where the final two constraints ensure that $X = xx^\top$ for some vector $x$, so that $x$ can be recovered from $X$ by a rank-one factorization.
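To make the lifting concrete, here is a small numerical check on made-up data (the matrices $A$, $B$ and the vector $x$ below are illustrative assumptions, not from the question): for $X = xx^\top$, the linear functionals $\langle A, X\rangle$ and $\langle B, X\rangle$ reproduce the quadratic forms, and $X$ is rank-one and PSD.

```python
import numpy as np

# Illustrative data, not from the original question.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T                           # symmetric PSD
B = np.diag([1.0, -1.0, 2.0, -2.0])  # symmetric indefinite, eigenvalues in +/- pairs
x = rng.uniform(0.0, 1.0, n)

X = np.outer(x, x)                    # the lifted variable X = x x^T

quad_A = x @ A @ x                    # x^T A x
quad_B = x @ B @ x                    # x^T B x
inner_A = np.trace(A @ X)             # <A, X> = trace(A X)
inner_B = np.trace(B @ X)             # <B, X> = trace(B X)
rank_X = np.linalg.matrix_rank(X)     # should be 1
min_eig = np.linalg.eigvalsh(X).min() # should be >= 0 (up to roundoff)
```

This is exactly why the rank-one and PSD constraints together encode the original quadratic forms linearly in $X$.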

Your actual problem can be handled in a similar fashion by first applying a process called homogenization, which absorbs the linear term into a single quadratic form in an augmented variable.
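Concretely, one standard homogenization (sketched here; the notation is mine, not the answerer's) augments ${\bf x}$ with a constant first coordinate:

$$ \tilde{\bf x} = \begin{bmatrix} 1 \\ {\bf x} \end{bmatrix}, \qquad \tilde{\bf A} = \begin{bmatrix} 0 & \tfrac{1}{2}{\bf a}^\top \\ \tfrac{1}{2}{\bf a} & {\bf A} \end{bmatrix}, \qquad \tilde{\bf x}^\top \tilde{\bf A} \, \tilde{\bf x} = {\bf x}^\top {\bf A} \, {\bf x} + {\bf a}^\top {\bf x}. $$

Padding ${\bf B}$ with a zero row and column handles the quadratic constraint the same way, and lifting $\tilde{\bf X} = \tilde{\bf x}\tilde{\bf x}^\top$ makes the whole objective linear, at the cost of the extra constraint $\tilde{X}_{11} = 1$.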

The non-convex aspect has now been isolated in the rank constraint. Removing it yields a semidefinite relaxation of your quadratic program; this relaxation is canonical, and people have studied it in detail (I have not, however).

ANSWER

The problem that you have can be written as:

$$ \begin{array}{ll} \underset {{\bf x} \in \mathbb{R}^n} {\text{minimize}} & \langle {\bf A}, \, {\bf X}\rangle + \langle {\bf a}, {\bf x}\rangle \\ \text{subject to} & \langle {\bf B}, \, {\bf X}\rangle = 0\\ & \begin{bmatrix}1 & {\bf x}^\top\\ {\bf x} & {\bf X}\end{bmatrix} \succeq 0\\ & \text{rank}({\bf X}) = 1\\ & 0 \le x_i \le c \end{array} $$

Removing the rank constraint yields an SDP that should be easy to solve with any SDP solver, and its optimal value gives a lower bound on your problem. In the literature there are techniques for handling rank constraints; a basic approach is what is called an iterative rank-minimization approach. Searching for that term will turn up multiple papers on the subject.