A global optimum: $\max_{x} \frac{1}{2}\left\| X (a + b) \right\|_2^2 \ \text{s.t.} \ \operatorname{Re}\left\{a^H X b\right\} \leq \delta,\ 0 \leq x \leq 1$, where $X := \operatorname{Diag}(x)$


How can I find (using any software) a global optimum for the following (non-convex) problem?

\begin{align} \text{maximize}_{x \in \mathbb{R}^{n \times 1}} \quad & \frac{1}{2}\left\| X (a + b) \right\|_2^2\\ \text{subject to }\quad & \operatorname{Re}\left\{a^H X b\right\} \leq \delta \\ \quad & 0 \leq x \leq 1 \end{align} where $X := \operatorname{Diag}(x) \in \mathbb{R}^{n \times n} $ creates a diagonal matrix by stacking the vector $x$ components along the diagonal, the vectors $a, b \in \mathbb{C}^{n \times 1}$ are given, and also $\delta$ is given.

Which solver should I use? (I have tried solving a small instance via exhaustive search, and it seems I do get a solution...)
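A minimal sketch of such an exhaustive search (in Python/NumPy for illustration; the variable names, grid resolution, and problem size are my own choices, not from the post):

```python
# Brute-force grid search for a tiny instance of the problem:
# discretize each x_i on a coarse grid and keep the best feasible point.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 3                              # keep n tiny: the grid has len(grid)**n points
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
delta = 1e-3
grid = np.linspace(0.0, 1.0, 11)   # candidate values for each x_i

best_val, best_x = -np.inf, None
for x in itertools.product(grid, repeat=n):
    x = np.asarray(x)
    if np.real(np.vdot(a, x * b)) <= delta:           # Re{a^H X b} <= delta
        val = 0.5 * np.linalg.norm(x * (a + b)) ** 2  # 0.5 * ||X (a+b)||_2^2
        if val > best_val:
            best_val, best_x = val, x
```

Note that $x = 0$ is always feasible when $\delta \geq 0$, so the search is guaranteed to return some point.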


I tried YALMIP (with BARON or BMIBNB as the solver) but I get all NaNs as the solution, so it seems the problem is not being solved...

% MATLAB code using YALMIP
clear variables    
a           = randn(8,1)+1j*randn(8,1); 
b           = randn(8,1)+1j*randn(8,1); 
x           = sdpvar(8,1);
Constraints = [real(a'*diag(x)*b) <= 1e-3, 0 <= x <= 1]; % YALMIP has no strict inequalities
Objective   = -norm( diag(x) * (a+ b), 2);
options     = sdpsettings('verbose',1,'solver','baron');
%options    = sdpsettings('verbose',1,'solver','bmibnb');
sol         = optimize(Constraints,Objective,options);
value(x)

There are 2 answers below.

Best answer:

As for your edited question: you want to maximize $\frac{1}{2}\sum_i |a_i+b_i|^2 x_i^2$ subject to $\operatorname{Re}(\bar a_i b_i)\,x_i\le\delta$ for each $i$ and $0< x_i \le 1$ for each $i$.

When you relax the constraint to require only that $\operatorname{Re}(\bar a_i b_i)\,x_i\le\delta$ and $0\le x_i\le1$ for all $i$, the constraint set becomes compact, and the maximum of the convex objective function occurs at an extreme point of the relaxed constraint set. Let $[l_i,u_i]=\{t: \operatorname{Re}(\bar a_i b_i)\,t\le\delta\}\cap [0,1]$; then the maximum occurs at a point $x$ for which each component $x_i$ is either $l_i$ or $u_i$. Since the coefficient of $x_i^2$ in the objective function is non-negative, the maximum of the relaxed problem occurs at $x_i=u_i$ for each $i$.

This point lies in the original constraint set, so it solves your unrelaxed optimization problem as well.
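Under this per-component reading of the constraint (assuming $\delta \geq 0$), the argmax above has a simple closed form; a sketch in Python/NumPy (function name and signature are illustrative):

```python
# Closed-form argmax from the extreme-point argument above:
# each x_i is set to u_i, the upper endpoint of its feasible interval.
import numpy as np

def componentwise_max(a, b, delta):
    """Return x with x_i = u_i, the upper endpoint of
    {t in [0, 1] : Re(conj(a_i) b_i) * t <= delta}. Assumes delta >= 0."""
    c = np.real(np.conj(a) * b)   # c_i = Re(a_i^bar b_i)
    u = np.ones_like(c)           # if c_i <= 0, the constraint never binds on [0, 1]
    pos = c > 0
    u[pos] = np.minimum(1.0, delta / c[pos])
    return u
```

For example, with $a_i = b_i = 1$ and $\delta = 0.5$ the constraint binds and $x_i = 0.5$; flipping the sign of $b_i$ makes the constraint vacuous on $[0, 1]$, giving $x_i = 1$.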

Another answer:

Answered on the YALMIP forum: the 2-norm is an operator that requires a convex model, so one should use a formulation with a quadratic objective instead, i.e.

Objective   = -(diag(x) * (a+ b))'*(diag(x) * (a+ b));