Variable fixation for solving quadratic program


I have read the following article: https://link.springer.com/article/10.1007/s10898-011-9713-2

The following theorem is stated without proof:

Let $x^*$ denote the optimal solution of $\min\limits_{x \in [0,1]^n} f(x) = \frac{1}{2}x^{\text{T}}Qx + c^{\text{T}}x$, where $Q$ is an $n \times n$ symmetric matrix and $c \in \mathbb{R}^n$. Let
\begin{align*}
l_i &= c_i + \frac{1}{2}q_{ii} + \sum_{j \neq i}\min(0,q_{ij}), \\
u_i &= c_i + \frac{1}{2}q_{ii} + \sum_{j \neq i}\max(0,q_{ij}).
\end{align*}

(i) If $l_i > 0$ then $x_i^* = 0$;

(ii) If $u_i < 0$ then $x_i^* = 1$.
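To make sure I am reading the theorem correctly, here is a small numerical sketch of the fixation rule as I understand it (the helper names `fixation_bounds` and `fix_variables` are my own, not from either paper):

```python
import numpy as np

def fixation_bounds(Q, c):
    """Compute the theorem's bounds
    l_i = c_i + q_ii/2 + sum_{j != i} min(0, q_ij)
    u_i = c_i + q_ii/2 + sum_{j != i} max(0, q_ij).
    """
    Q = np.asarray(Q, dtype=float)
    c = np.asarray(c, dtype=float)
    off = Q - np.diag(np.diag(Q))                    # off-diagonal part of Q
    l = c + 0.5 * np.diag(Q) + np.minimum(off, 0).sum(axis=1)
    u = c + 0.5 * np.diag(Q) + np.maximum(off, 0).sum(axis=1)
    return l, u

def fix_variables(Q, c):
    """Return {i: fixed value} according to (i) and (ii)."""
    l, u = fixation_bounds(Q, c)
    fixed = {}
    for i in range(len(l)):
        if l[i] > 0:
            fixed[i] = 0.0      # (i): l_i > 0  =>  x_i* = 0
        elif u[i] < 0:
            fixed[i] = 1.0      # (ii): u_i < 0  =>  x_i* = 1
    return fixed

# Example: Q = [[2,-1],[-1,2]], c = [1,-3] gives l = (1,-3), u = (2,-2),
# so the rule fixes x_1 = 0 and x_2 = 1; direct minimization of
# f(x_1, x_2) = x_1^2 + x_2^2 - x_1 x_2 + x_1 - 3 x_2 over [0,1]^2
# indeed has its minimum at (0, 1).
print(fix_variables([[2.0, -1.0], [-1.0, 2.0]], [1.0, -3.0]))
```

On this example the fixation agrees with the true box-constrained minimizer, but of course that does not tell me why the $\frac{1}{2}q_{ii}$ term appears in the bounds.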

They state that the proof is given in https://link.springer.com/article/10.1007/BF02247879

But I can't reconstruct the proof. In the second paper they also consider the problem $\min\limits_{x \in [0,1]^n} f(x) = \frac{1}{2}x^{\text{T}}Qx + c^{\text{T}}x$. They then state that variables whose partial derivatives have fixed sign on $[0,1]^n$ can be forced to $0$ or $1$. First of all: what does it mean for a partial derivative to have fixed sign on $[0,1]^n$?
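My current guess: since $Q$ is symmetric, $\partial f/\partial x_i = c_i + \sum_j q_{ij}x_j$, which is affine in $x$, so its minimum and maximum over the box $[0,1]^n$ are attained at vertices; "fixed sign" would then mean this derivative is everywhere positive (or everywhere negative) on the box. Here is how I would check that numerically by vertex enumeration for small $n$ (the function name `partial_sign_fixed` is my own):

```python
import itertools
import numpy as np

def partial_sign_fixed(Q, c, i):
    """Check whether d f / d x_i = c_i + (Q x)_i keeps one sign on [0,1]^n.

    The derivative is affine in x, so its extrema over the box are
    attained at the 2^n vertices; enumerate them (small n only).
    Returns +1 (always positive), -1 (always negative), or 0 otherwise.
    """
    Q = np.asarray(Q, dtype=float)
    c = np.asarray(c, dtype=float)
    n = len(c)
    vals = [c[i] + np.dot(Q[i], v)
            for v in itertools.product([0.0, 1.0], repeat=n)]
    if min(vals) > 0:
        return +1   # f strictly increasing in x_i everywhere => x_i* = 0
    if max(vals) < 0:
        return -1   # f strictly decreasing in x_i everywhere => x_i* = 1
    return 0
```

With $Q = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$, $c = (1,-3)$, this gives sign $-1$ for $x_2$ (so $x_2^* = 1$), but sign $0$ for $x_1$, even though the theorem's $l_1 = 1 > 0$ fixes $x_1^* = 0$. So the theorem's bounds with the $\frac{1}{2}q_{ii}$ term appear to be strictly stronger than the plain derivative-sign argument, which is exactly the part of the proof I can't reconstruct.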