Minimum/Maximum of a quadratic function of two variables subject to a quadratic constraint


Let $r = [x, y]^T $, define the quadratic function

$f(r) = r^T A r + b^T r + c $

I want to find the minimum/maximum of $f(r)$ subject to the quadratic constraint

$C(r) = r^T D r + e^T r + f = 0 $

where $A, D$ are symmetric $2 \times 2$ matrices, $b, e$ are $2 \times 1$ vectors, $c, f$ are scalars. It is further assumed that $D$ is positive definite.

How is this achievable? What are the steps to find the points $(x,y)$ that minimize/maximize $f(r)$?

For example, find the minimizing/maximizing points $(x,y)$ of the function

$f(x,y) = x^2 + 2 xy + 3 y^2 + 2x - y + 5 $

Subject to

$C(x,y) = 3 x^2 + 2 y^2 + xy + 2 x + 5 y - 10 = 0 $

Also, how would you tackle the general case where the number of variables is $n \ge 3 $?

This is an exercise in constrained optimization.

Best answer

In the most general case of optimization problems with a quadratic objective function $f$, not necessarily convex, and quadratic constraints, the problem can be written as: \begin{align*} & \text{minimize} \ f(\pmb{x}) = \pmb{x}^T A_{0} \pmb{x} + 2 \pmb{b}_{0}^T \pmb{x} + c_{0} \\ & \text{subject to: } \tag{P}\\ & \qquad \pmb{x}^T A_{i} \pmb{x} + 2 \pmb{b}_{i}^T \pmb{x} \leq c_{i} \quad \forall i = 1, \dots, m \end{align*} with variable $\pmb{x} \in \mathbb{R}^{n}$ and given problem parameters: symmetric matrices $A_{i} \in S^{n}$, vectors $\pmb{b}_{i} \in \mathbb{R}^{n}$, and scalars $c_{i} \in \mathbb{R}$. Problem (P) is the general form of a Quadratically Constrained Quadratic Program (QCQP).
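
As a concrete illustration of this matrix form (a sketch using the example from the question; the helper name is ad hoc), note the factor of 2 on the linear term in (P), which halves the raw linear coefficients when building $\pmb{b}$:

```python
import numpy as np

def quad_form(A, b, c, x):
    # Evaluates x^T A x + 2 b^T x + c, the canonical form used in (P)
    return x @ A @ x + 2 * b @ x + c

# f(x, y) = x^2 + 2xy + 3y^2 + 2x - y + 5: the xy coefficient is split
# symmetrically across A, and the raw linear coefficients are halved into b
A0 = np.array([[1.0, 1.0], [1.0, 3.0]])
b0 = np.array([2.0 / 2, -1.0 / 2])
c0 = 5.0

r = np.array([0.5, -2.0])
direct = r[0]**2 + 2*r[0]*r[1] + 3*r[1]**2 + 2*r[0] - r[1] + 5
assert np.isclose(quad_form(A0, b0, c0, r), direct)
```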

To find a solution, first note that the objective function $f$ can always be shifted by a constant without changing the solution of the optimization problem, so assume $c_{0} = 0$. Furthermore, the problem can be homogenized by adding an auxiliary variable $x_{n+1}$ to get an equivalent problem: \begin{align*} & \text{minimize} \ \pmb{x}^T A_{0} \pmb{x} + 2 x_{n+1} \pmb{b}_{0}^T \pmb{x} \\ & \text{subject to: } \tag{HP}\\ & \qquad \pmb{x}^T A_{i} \pmb{x} + 2 \pmb{b}_{i}^T \pmb{x} \leq c_{i} \quad \forall i = 1, \dots,m \\ & \qquad x_{n+1}^{2} = 1 \end{align*}
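
A quick numeric check of this equivalence (a sketch; the names are ad hoc): setting $x_{n+1} = 1$ recovers the original objective, and $x_{n+1} = -1$ recovers it after negating $\pmb{x}$, so optimizing (HP) optimizes the original problem:

```python
import numpy as np

A0 = np.array([[1.0, 1.0], [1.0, 3.0]])   # example objective matrix
b0 = np.array([1.0, -0.5])

def f_hom(x, t):
    # Homogenized objective x^T A0 x + 2 t b0^T x, with t^2 = 1 enforced
    return x @ A0 @ x + 2 * t * (b0 @ x)

def f_orig(x):
    # Original objective (constant c0 shifted away)
    return x @ A0 @ x + 2 * (b0 @ x)

x = np.array([0.7, -1.3])
assert np.isclose(f_hom(x, 1.0), f_orig(x))    # t = +1: identical
assert np.isclose(f_hom(-x, -1.0), f_orig(x))  # t = -1: same value at -x
```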

This problem is in general NP-hard. One way to find an approximate solution in polynomial time is by using Semidefinite Programming (SDP). The SDP relaxation can be written as: \begin{align*} \DeclareMathOperator{\dotp}{{ \bullet }} & \text{minimize} \ \begin{pmatrix} A_0 & \pmb{b_0} \\ \pmb{b_0}^T & 0 \end{pmatrix} \dotp X \\ & \text{subject to: } \tag{SDP}\\ & \qquad \begin{pmatrix} A_i & \pmb{b_i} \\ \pmb{b_i}^T & 0 \end{pmatrix} \dotp X \leq c_{i} \quad \forall i = 1, \dots, m \\[.5em] & \qquad \begin{pmatrix} \pmb{0} & \pmb{0} \\ \pmb{0} & 1 \end{pmatrix} \dotp X = 1 \\[.5em] & \qquad \, X \succeq 0 \end{align*} where $X \in S^{n+1}$. Note that this problem is no longer quadratic but linear in $X$. This makes it computationally tractable.
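
The relaxation rests on the lifting identity $\pmb{x}^T A \pmb{x} + 2\pmb{b}^T \pmb{x} = M \bullet X$ for $M = \begin{pmatrix} A & \pmb{b} \\ \pmb{b}^T & 0 \end{pmatrix}$ and the rank-one choice $X = \pmb{z}\pmb{z}^T$ with $\pmb{z} = (\pmb{x}, x_{n+1})$; dropping the rank-one requirement and keeping only $X \succeq 0$ gives (SDP). A quick numeric check of the identity (a sketch, with ad hoc values):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, -0.5])

x = np.array([0.3, 2.0])
z = np.append(x, 1.0)              # homogeneous coordinates (x, 1)
X = np.outer(z, z)                 # rank-one lifted variable
M = np.block([[A, b[:, None]], [b[None, :], np.zeros((1, 1))]])

# Frobenius inner product M • X = trace(M X) equals the quadratic form
assert np.isclose(np.trace(M @ X), x @ A @ x + 2 * b @ x)
```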

To solve nonlinear programs with a single quadratic equality constraint $g(\pmb{x}) = c_{1}$, as specified in the problem statement:

  • When the objective function is convex ($A_{0} \succeq 0$), the equality constraint can be replaced by the single inequality constraint $g(\pmb{x}) \leq c_{1}$, provided the minimizer of the relaxed problem lies on the boundary $g(\pmb{x}) = c_{1}$; otherwise the relaxation enlarges the feasible set and its minimizer need not satisfy the equality. Appendix B.1 of [2] (Single constraint quadratic optimization) describes how the SDP relaxation produces the optimal value in this special case of a single constraint, even when the objective and constraint are both non-convex.
  • When the objective is not convex, the equality has to be replaced by two quadratic inequalities. In this case, too, [3] shows that the corresponding SDP relaxation has no gap with the true optimal value.

For the example that you provided, the objective function is convex ($A_{0} \succeq 0$), and thus the equality constraint can be replaced by a single inequality constraint $g(\pmb{x}) \leq c_{1}$. Plugging in the values, we get: \begin{align*} A_{0} &= \begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix}, \quad \pmb{b_{0}} = (1, -1/2)^{T}, \\[.5em] A_{1} &= \begin{pmatrix} 3 & 1/2 \\ 1/2 & 2 \end{pmatrix}, \quad \pmb{b_{1}} = (1, 5/2)^{T}, \quad c_{1} = 10 \end{align*}
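
A quick sanity check (a sketch; the names are ad hoc) that these parameters reproduce the constraint from the question and that $A_{0} \succeq 0$:

```python
import numpy as np

A0 = np.array([[1.0, 1.0], [1.0, 3.0]])
A1 = np.array([[3.0, 0.5], [0.5, 2.0]])
b1 = np.array([1.0, 2.5])
c1 = 10.0

# Constraint in the question: C(x, y) = 3x^2 + 2y^2 + xy + 2x + 5y - 10 <= 0
r = np.array([1.2, -0.4])
direct = 3*r[0]**2 + 2*r[1]**2 + r[0]*r[1] + 2*r[0] + 5*r[1] - 10
assert np.isclose(r @ A1 @ r + 2 * b1 @ r - c1, direct)

# A0 has nonnegative eigenvalues, so the objective is convex
assert np.all(np.linalg.eigvalsh(A0) >= 0)
```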

A step-by-step solution of this SDP relaxation in CVXPY, where the operator @ denotes matrix multiplication and the operator >> denotes the semidefinite matrix inequality:

import numpy as np
import cvxpy as cp

# Lifted constraint matrix [[A1, b1], [b1^T, 0]]
A = np.array([[3, 1/2, 1], [1/2, 2, 5/2], [1, 5/2, 0]])
# Selects the homogenization entry X[2, 2]
B = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]])
# Lifted objective matrix [[A0, b0], [b0^T, 0]]
C = np.array([[1, 1, 1], [1, 3, -1/2], [1, -1/2, 0]])

X = cp.Variable((3, 3), symmetric=True)
constraints = [cp.trace(A @ X) <= 10]   # quadratic constraint, lifted
constraints += [cp.trace(B @ X) == 1]   # x_{n+1}^2 = 1
constraints += [X >> 0]                 # X positive semidefinite
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()

To construct the vector solution $\pmb{x}$, decompose the solution as $X = \pmb{x} \pmb{x}^T$ (an outer product in homogeneous coordinates) and transform back, $\pmb{x} \rightarrow \pmb{x}/x_{n+1}$:

from scipy.linalg import sqrtm

# For a rank-one X = z z^T, sqrtm(X) = z z^T / ||z||, so every nonzero
# row of the matrix square root is proportional to z
x = sqrtm(X.value).real
x = x[0][:2]/x[0][2]   # back from homogeneous coordinates
x
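
An equivalent extraction, if sqrtm is unavailable, takes the top eigenpair of $X$ (a sketch, demonstrated on a synthetic rank-one matrix built from the expected solution rather than the solver output):

```python
import numpy as np

# Synthetic rank-one lifted solution z z^T with z = (x*, 1) in homogeneous
# coordinates; a tight SDP relaxation returns (approximately) this matrix.
z = np.array([-1.75, 0.75, 1.0])
X = np.outer(z, z)

# For (near) rank-one X, sqrt(lambda_max) times the leading eigenvector
# recovers z up to sign; eigh returns eigenvalues in ascending order.
w, V = np.linalg.eigh(X)
lead = np.sqrt(w[-1]) * V[:, -1]
if lead[-1] < 0:             # fix the overall sign via the homogeneous entry
    lead = -lead

x_rec = lead[:2] / lead[2]   # back from homogeneous coordinates
```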

We get array([-1.75002478, 0.74990864]), which is very close to $\pmb{x^*} = (-7/4, 3/4)^T$. One caveat: this point is actually the unconstrained minimizer of $f$ and satisfies $C(\pmb{x^*}) = -3/4 < 0$, so here the single-inequality relaxation is not tight against the original equality constraint $C = 0$; to enforce $C = 0$ exactly, the two-inequality formulation of [3] would be needed. Similarly, to find the maximizer, use cp.Maximize() instead of cp.Minimize(). Both solutions are shown in Figure 1 below:
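
As a numeric footnote (a quick check, not part of the original derivation): the point $(-7/4, 3/4)$ is the unconstrained stationary point of $f$, and it lies strictly inside the region $C < 0$ rather than on the curve $C = 0$:

```python
import numpy as np

def f_grad(x, y):
    # Gradient of f(x, y) = x^2 + 2xy + 3y^2 + 2x - y + 5
    return np.array([2*x + 2*y + 2, 2*x + 6*y - 1])

def C_val(x, y):
    # Constraint C(x, y) = 3x^2 + 2y^2 + xy + 2x + 5y - 10
    return 3*x**2 + 2*y**2 + x*y + 2*x + 5*y - 10

xs, ys = -7/4, 3/4
print(f_grad(xs, ys))   # [0. 0.]  -> unconstrained minimizer of f
print(C_val(xs, ys))    # -0.75   -> strictly inside C < 0, not on C = 0
```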

Figure 1: https://i.stack.imgur.com/X344i.png

[2]: S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004. https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf

[3]: Y. Ye and S. Zhang. New results on quadratic minimization. SIAM Journal on Optimization 14(1), 245–267, 2003. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.23.329&rep=rep1&type=pdf