Maximize linear objective with quartic equality constraints


I have the following nonlinear programming problem:

  1. real variables, the number of variables is greater than the number of constraints.
  2. need to maximize a linear objective function
  3. Some constraints are cubic equality equations (all terms are degree 3).
  4. Some constraints are quadratic equality equations (all terms are degree 2).

Is there any established method for solving such a program? Also, is there software for solving such problems?

2 Answers

Best Answer

You can try applying the KKT conditions (i.e., Lagrange multipliers) and then apply any standard method for unconstrained optimization: e.g., gradient descent, Newton's method, etc.

If you want to maximize $f(x)$, subject to $g_1(x)=\cdots=g_k(x)=0$, you can try setting

$$\Psi(x) = f(x) - \lambda g_1(x)^2 - \cdots - \lambda g_k(x)^2$$

and then maximize $\Psi(x)$, for some sufficiently large constant $\lambda$. A standard method is to start out with $\lambda$ being small and gradually increase it in each iteration of the optimization. This is a heuristic, as strictly speaking we might need $k$ different $\lambda$'s, but it might be sufficient in your use case. For instance, you could apply gradient descent or Newton's method to $\Psi(x)$.
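As a minimal sketch of this penalty heuristic on a toy problem (the class name, starting point, step sizes, and $\lambda$ schedule below are all illustrative choices, not part of the answer above): maximize $f(x)=x_0+x_1$ subject to $g(x)=x_0^2+x_1^2-1=0$, whose solution is $x_0=x_1=1/\sqrt{2}$.

```java
// Quadratic-penalty heuristic: maximize f(x) = x0 + x1 subject to
// g(x) = x0^2 + x1^2 - 1 = 0, via gradient ascent on
// Psi(x) = f(x) - lambda * g(x)^2, gradually increasing lambda.
public class PenaltyDemo {

    static double g(double[] x) { return x[0] * x[0] + x[1] * x[1] - 1.0; }

    static double[] solve() {
        double[] x = {0.0, 0.0};
        for (double lambda = 1.0; lambda <= 5000.0; lambda *= 2.0) {
            double eta = 0.1 / lambda;  // step size must shrink as lambda grows
            for (int it = 0; it < 500; it++) {
                double gv = g(x);
                // dPsi/dx_i = df/dx_i - 2 * lambda * g(x) * dg/dx_i
                double d0 = 1.0 - 2.0 * lambda * gv * (2.0 * x[0]);
                double d1 = 1.0 - 2.0 * lambda * gv * (2.0 * x[1]);
                x[0] += eta * d0;
                x[1] += eta * d1;
            }
        }
        return x;
    }

    public static void main(String[] args) {
        double[] x = solve();
        System.out.printf("x = (%.4f, %.4f), f = %.4f, |g| = %.2e%n",
                x[0], x[1], x[0] + x[1], Math.abs(g(x)));
    }
}
```

Note that the inner step size is scaled as $1/\lambda$: the curvature of the penalty term grows with $\lambda$, so a fixed step size would eventually make the gradient ascent diverge.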

Finally, you could express all the equations as quadratic equations by adding extra variables. For instance, you can express $y=x^4$ as the quadratic constraints $y=t^2$, $t=x^2$, where $t$ is a new variable. This method works for arbitrary quartic equations: for each pair $x_i,x_j$ of variables, introduce a new variable $t_{i,j}$, add the constraint $t_{i,j} - x_i x_j = 0$ (a quadratic equation), and then replace each occurrence of $x_i x_j$ in any monomial of any quartic equation with $t_{i,j}$. You end up maximizing a linear function subject to a collection of quadratic equality constraints, which could then be handed to an off-the-shelf QCQP solver.
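A small numeric illustration of the lifting step (the example polynomial and names are made up for this sketch): the quartic constraint $x^4 - 5x^2 + 4 = 0$ (roots $x=\pm1,\pm2$) becomes the pair of quadratic constraints $t - x^2 = 0$ and $t^2 - 5t + 4 = 0$.

```java
// Lifting a quartic constraint into quadratic ones with an auxiliary variable.
// Original quartic constraint: x^4 - 5x^2 + 4 = 0   (roots x = +-1, +-2).
// Lifted quadratic system in (x, t):  t - x^2 = 0  and  t^2 - 5t + 4 = 0.
public class QuarticLift {

    static double quartic(double x)           { return x * x * x * x - 5 * x * x + 4; }
    static double lifted1(double x, double t) { return t - x * x; }        // t = x^2
    static double lifted2(double t)           { return t * t - 5 * t + 4; } // quartic rewritten in t

    public static void main(String[] args) {
        for (double x : new double[]{1.0, 2.0, -1.0, -2.0}) {
            double t = x * x;  // the unique t satisfying the first lifted constraint
            System.out.printf("x=%5.1f  quartic=%.1f  lifted=(%.1f, %.1f)%n",
                    x, quartic(x), lifted1(x, t), lifted2(t));
        }
    }
}
```

Every root of the quartic yields a feasible $(x,t)$ of the lifted system and vice versa, which is exactly the equivalence the reduction relies on.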

Answer

Following the line of D.W.'s approach, I took a look at the methods of Lagrange multipliers. I found that the first-order Lagrangian method (D. Bertsekas, Nonlinear Programming, 3rd ed.) is well suited to my problem, which has $N$ (hundreds of) variables and $M$ (hundreds of) equality constraints, with $N>M$.

First, construct the Lagrangian function $$ L(\mathbf{x},\lambda)=f(x)-\lambda_1 g_1(x)-\lambda_2 g_2(x)-\cdots $$ which introduces $M$ additional variables $\lambda_j$ (the multipliers).

Second, take the partial derivatives $$ \frac{\partial L(\mathbf{x},\lambda)}{\partial x_i}= \cdots, \;\; \forall\, 1\leq i \leq N. $$

Third, the iterative algorithm repeats the following updates: $$ \begin{cases} x_i \leftarrow x_i + \eta \dfrac{\partial L(\mathbf{x},\lambda)}{\partial x_i}, & \forall\, 1\leq i \leq N,\\ \lambda_j \leftarrow \lambda_j +\eta\, g_j(\mathbf{x}), & \forall\, 1\leq j \leq M, \end{cases} $$ where $\eta$ is a positive learning rate (step size): gradient ascent on $L$ in $\mathbf{x}$, paired with a multiplier update driven by the constraint violations. I implemented the algorithm from scratch in Java. The constraint violations $|g_j(\mathbf{x})|$ rise at the beginning and then smoothly vanish.
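The iteration above can be sketched on a toy instance (this is an illustrative reconstruction, not the poster's Java code; the starting point, $\eta$, and iteration count are assumptions): maximize $f(x)=x_0+x_1$ subject to $g(x)=x_0^2+x_1^2-1=0$, using $L(\mathbf{x},\lambda)=f(x)-\lambda g(x)$.

```java
// First-order Lagrangian method (primal gradient ascent plus multiplier
// update) on a toy problem: maximize f(x) = x0 + x1 subject to
// g(x) = x0^2 + x1^2 - 1 = 0, with L(x, lambda) = f(x) - lambda * g(x).
public class FirstOrderLagrangian {

    static double g(double[] x) { return x[0] * x[0] + x[1] * x[1] - 1.0; }

    /** Returns {x0, x1, lambda} after the iteration settles. */
    static double[] solve() {
        double[] x = {0.5, 0.5};   // illustrative starting point
        double lambda = 1.0;
        double eta = 0.02;         // positive learning rate
        for (int it = 0; it < 20000; it++) {
            // dL/dx_i = df/dx_i - lambda * dg/dx_i
            double d0 = 1.0 - lambda * 2.0 * x[0];
            double d1 = 1.0 - lambda * 2.0 * x[1];
            x[0] += eta * d0;      // x_i <- x_i + eta * dL/dx_i
            x[1] += eta * d1;
            lambda += eta * g(x);  // lambda_j <- lambda_j + eta * g_j(x)
        }
        return new double[]{x[0], x[1], lambda};
    }

    public static void main(String[] args) {
        double[] r = solve();
        System.out.printf("x = (%.4f, %.4f), lambda = %.4f, |g| = %.2e%n",
                r[0], r[1], r[2], Math.abs(g(new double[]{r[0], r[1]})));
    }
}
```

On this instance the iterates spiral into the constrained maximizer $x_0=x_1=1/\sqrt{2}$, with the multiplier converging to $\lambda=1/\sqrt{2}$; the transient overshoot of $|g(\mathbf{x})|$ before it decays matches the behavior described above.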