How to formulate an augmented Lagrangian as the objective function of an SOS problem?


I am implementing the ADMM algorithm to solve a complex optimization problem with SOS constraints. The $x$-update of the algorithm looks like this:

$$ x^{k+1}= {\arg\min}_{x \in S} \|x-z^{k}+\lambda^{k}\|^2_F$$

where $S$ is the set of points satisfying some sum-of-squares constraints (i.e., the $x$-update is a projection onto $S$).
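For context, assuming this is the standard scaled-dual form of ADMM (as in Boyd et al.), the remaining updates of the iteration would typically be

$$ z^{k+1} = {\arg\min}_{z} \; g(z) + \tfrac{\rho}{2}\|x^{k+1}-z+\lambda^{k}\|^2_F, \qquad \lambda^{k+1} = \lambda^{k} + x^{k+1} - z^{k+1}, $$

so only the $x$-update above involves the SOS-constrained set $S$.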

I'm looking to use SOSTOOLS in MATLAB to solve this problem, but SOS programs only allow a linear objective function, whereas mine is quadratic. I have seen such problems solved in the literature, but I'm at a loss as to how. Any help would be appreciated.
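For reference, a standard way to fit a quadratic objective into a framework that only accepts linear objectives is the epigraph reformulation combined with a Schur complement (this is a general semidefinite-programming trick, not specific to SOSTOOLS):

$$ \min_{x \in S,\; t} \; t \quad \text{s.t.} \quad \begin{bmatrix} t & \operatorname{vec}(x - z^{k} + \lambda^{k})^{\top} \\ \operatorname{vec}(x - z^{k} + \lambda^{k}) & I \end{bmatrix} \succeq 0, $$

where the linear matrix inequality is equivalent to $\|x - z^{k} + \lambda^{k}\|_F^2 \le t$.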


First, you seem to confuse limitations of the theory (sum-of-squares programming) with limitations of a particular tool (SOSTOOLS).

Second, I suspect you are confusing the objective of the original problem with the objective of the sum-of-squares program.

Third, do you really start with a sum-of-squares representation of the feasible set in the variable $x$, or is there some underlying problem where you start with a polynomial constraint? It sounds extremely odd that you would have a sum-of-squares representation of that set to begin with and then want to lift it again. I assume you mean that you want to use a sum-of-squares representation of $S$ when minimizing your objective.

Regardless, here is an implementation in YALMIP, assuming you have a polynomial representation of $S$. Note that polynomial optimization via SOS requires you to derive a sum-of-squares representation of the optimization problem; i.e., the objective function of your original model is not the objective of the sum-of-squares model, and to cope with constraints you have to apply the Positivstellensatz and introduce multipliers.
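For a single constraint $g(x) \geq 0$, the certificate used in the code below is the standard S-procedure/Putinar-style argument:

$$ p(x) - t - s(x)\,g(x) \;\text{ is SOS, and } s(x) \;\text{ is SOS} \;\Longrightarrow\; p(x) \geq t \;\text{ whenever } g(x) \geq 0, $$

so maximizing $t$ over all such certificates yields a lower bound on the constrained minimum of $p$.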

Trivial example: minimize $(x-5)^2$ subject to $x^4 + x \leq 1$:

 sdpvar x
 sdpvar t
 p = (x-5)^2;
 g = 1 - x - x^4;   % constraint g(x) >= 0
 % Multiplier: degree-2 polynomial with coefficient vector c
 [s,c] = polynomial(x,2);
 % Certificate that p(x) >= t when g(x) >= 0
 Model = [sos(p-t-s*g), sos(s)];
 % Maximize the lower bound t; parametric variables are [c;t]
 optimize(Model,-t,[],[c;t])
 value(t)

https://yalmip.github.io/tutorial/sumofsquaresprogramming/

Having said that, people often have far too high expectations of sum-of-squares (and fail to see that it typically does not compute a solution, but only a bound on the achievable objective). For solving a standard nonlinear polynomial optimization problem, you are typically much better off simply using a global optimization solver.
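As a sketch of that alternative, the same toy problem can be handed directly to YALMIP's built-in global branch-and-bound solver BMIBNB (assuming it is available in your installation; any global solver interfaced through YALMIP is used the same way):

```
 % Solve min (x-5)^2 s.t. x^4 + x <= 1 globally, no SOS machinery
 sdpvar x
 Model = [x^4 + x <= 1];
 optimize(Model, (x-5)^2, sdpsettings('solver','bmibnb'))
 value(x)   % recovers an actual minimizer, not just a bound
```

Unlike the SOS relaxation above, this returns a feasible $x$ rather than only a lower bound on the objective.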