Convex optimization approximation


Consider the optimization problem $\mathcal{P}_0$

\begin{equation*} \begin{aligned} & \underset{x\in \mathbb{R}^2}{\text{minimize}} & & \left\| x-p \right\|^2 \\ & \text{subject to} & & Ax \leq b, \quad x_1^2 + x_2^2 = 1 \end{aligned} \end{equation*}

where $p \in \mathbb{R}^2$ is a parameter, and $x^*(p)$ is the optimal solution.

I am looking for a *convex* optimization problem $\mathcal{P}$ such that $x^*(p)$ "approximates" the optimal solution of $\mathcal{P}$.


There is a whole field devoted to this problem. Look up material on semidefinite relaxations, sum-of-squares, and moment methods. Papers by Jean-Bernard Lasserre, such as "Global optimization with polynomials and the problem of moments", SIAM J. Optimization 11, pp. 796–817, might be a good start.

There is software for this problem too, such as the MATLAB toolboxes SOSTOOLS, GloptiPoly, and YALMIP. For reference, here is a quick test with YALMIP (developed by me). I solve the naive relaxation discussed in the comments (which typically yields a poor solution), the global optimization problem using YALMIP's built-in global solver (solving a nonconvex quadratic problem in $\mathbb{R}^2$ is trivial), and a semidefinite relaxation (which typically solves this problem exactly, i.e., the relaxation is tight and the solution $x$ can be recovered).
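For concreteness, the naive relaxation simply replaces the nonconvex equality $x_1^2 + x_2^2 = 1$ with the convex inequality $x^\top x \leq 1$:

\begin{equation*} \begin{aligned} & \underset{x\in \mathbb{R}^2}{\text{minimize}} & & \left\| x-p \right\|^2 \\ & \text{subject to} & & Ax \leq b, \quad x^\top x \leq 1 \end{aligned} \end{equation*}

This projects $p$ onto the unit disk intersected with the polyhedron, so whenever that projection lands strictly inside the disk, the information that the true solution lies on the circle is lost, which is why the relaxation typically yields a poor solution.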

% Random problem data (b chosen so that x = [1;0] is feasible)
p = randn(2,1);
A = randn(10,2);
b = A*[1;0] + rand(10,1);

% Naive relaxation
x = sdpvar(2,1);
Constraints = [A*x <= b,x'*x <= 1];
Objective = (x-p)'*(x-p);
solvesdp(Constraints,Objective)
% Display solution
[double(x)' double(Objective)]

% True global solution (reinstate the nonconvex equality constraint)
Constraints = [A*x <= b, x'*x == 1];
Objective = (x-p)'*(x-p);
ops = sdpsettings('solver','bmibnb');
solvesdp(Constraints,Objective,ops)
% Display solution
[double(x)' double(Objective)]

% First-order semidefinite (moment) relaxation
[~,extractedsolutions] = solvemoment(Constraints,Objective,[],1);
% Lower bound from the relaxation, and the recovered solution
relaxdouble(Objective)
extractedsolutions{1}
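As a sketch of what the first-order moment relaxation solves (up to the solver's internal bookkeeping), this is the standard Shor relaxation: introduce a matrix variable $X$ standing in for $xx^\top$, and relax $X = xx^\top$ to $X \succeq xx^\top$ via a Schur complement. Since $\left\| x-p \right\|^2 = x^\top x - 2p^\top x + p^\top p$ and $x^\top x$ becomes $\operatorname{tr}(X)$, the relaxation reads

\begin{equation*} \begin{aligned} & \underset{x\in \mathbb{R}^2,\ X \in \mathbb{S}^2}{\text{minimize}} & & \operatorname{tr}(X) - 2p^\top x + p^\top p \\ & \text{subject to} & & Ax \leq b, \quad \operatorname{tr}(X) = 1, \quad \begin{bmatrix} X & x \\ x^\top & 1 \end{bmatrix} \succeq 0 \end{aligned} \end{equation*}

If the optimal $X$ has rank one, then $X = xx^\top$, the relaxation is tight, and $x$ is recovered exactly; this is the typical outcome here and is what the extracted solution above reports.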