Constrained non-convex QP


Consider the optimization problem:

$$ \begin{aligned} \text{maximize}&\sum_{k=1}^9\left(x_k+\frac{k}{9}\right)^2\\ \text{subject to}&\begin{cases}\bar{x}\succeq \bar{0}\\\sum_{k=1}^9x_k=1.\end{cases} \end{aligned} $$

Where $\bar{x}=(x_1,\dots,x_9)\in\mathbb{R}^9$.

Find $\bar{x}^*$ for which the maximum is attained.

Here is what I have tried so far.

Notice that

$$ (Q) \begin{cases} \max~&\sum_{k=1}^9\left(x_k+\frac{k}{9}\right)^2\\ \text{subject to}~&x_k\ge 0:k\in\{1,\dots,9\}\\ &\sum_{k=1}^9x_k=1, \end{cases} $$

can be rewritten as

$$ (Q) \begin{cases} \min~&-\sum_{k=1}^9\left(x_k+\frac{k}{9}\right)^2\\ \text{subject to}~&-x_k\le 0:k\in\{1,\dots,9\}\\ &\left(\sum_{k=1}^9x_k\right)-1=0. \end{cases} $$

To derive the dual problem, we first define the Lagrangian

$$ \begin{aligned} \mathcal{L}(\bar{x},\bar{\lambda},\mu)&=\mathcal{L}(x_1,\dots,x_9,\lambda_1,\dots,\lambda_9,\mu)\\ &=-\sum_{k=1}^9\left(x_k+\frac{k}{9}\right)^2-\sum_{k=1}^9\lambda_kx_k+\mu\left(\left(\sum_{k=1}^9x_k\right)-1\right)\\ &=\sum_{k=1}^9\left(-x_k^2-\frac{2kx_k}{9}-\frac{k^2}{81}\right)-\sum_{k=1}^9\lambda_kx_k+\sum_{k=1}^9\mu x_k-\mu\\ &=\sum_{k=1}^9\left(-x_k^2-\frac{2kx_k}{9}-\lambda_kx_k+\mu x_k\right)-\frac{95}{27}-\mu, \end{aligned} $$

where we used $\sum_{k=1}^9\frac{k^2}{81}=\frac{285}{81}=\frac{95}{27}$.

To find the optimal solution, we need to solve the following system of equations:

$$ \begin{aligned} \frac{\partial}{\partial x_j}\mathcal{L}(\bar{x},\bar{\lambda},\mu)&=\frac{\partial}{\partial x_j}\sum_{k=1}^9\left(-x_k^2-\frac{2kx_k}{9}-\lambda_kx_k+\mu x_k\right)=-2x_j-\frac{2j}{9}-\lambda_j+\mu=0,\\ \text{thus }x_j&=-\frac{j}{9}-\frac{\lambda_j}{2}+\frac{\mu}{2}\quad\text{for }j\in\{1,\dots,9\},\text{ and}\\ \sum_{j=1}^9x_j&=\sum_{j=1}^9\left(-\frac{j}{9}-\frac{\lambda_j}{2}+\frac{\mu}{2}\right)=-5-\sum_{j=1}^9\frac{\lambda_j}{2}+\frac{9\mu}{2}=1,\text{ hence}\\ \sum_{j=1}^9\lambda_j&=9\mu-12. \end{aligned} $$

How do I proceed from here? What is the exact solution?


Let me organize the comments into an answer.

The optimal solution is $\overline{x}^* = (0, 0, 0, 0, 0, 0, 0, 0, 1)$.

To see this, write $f(x_1, \dots, x_9) = \sum_{k = 1}^9 \big(x_k + \frac{k}{9}\big)^2$. Evaluating $f$ at the vertices of the feasible set gives $f(1, 0, \dots, 0) < f(0, 1, 0, \dots, 0) < \dots < f(0, 0, \dots, 1)$. In general, a feasible $\overline{x} = (x_1, \dots, x_9)$ satisfies $0 \leq x_k \leq 1$ and $x_1 + \dotsb + x_9 = 1$, i.e. it is a convex combination of the vertices. Since the objective function $f$ is convex,

$$ \begin{aligned} f(x_1, x_2, \dots, x_9) & \leq x_1 f(1, 0, \dots, 0) + x_2 f(0, 1, \dots, 0) + \dotsb + x_9 f(0, 0, \dots, 1)\\ & \leq f(0, 0, \dots, 1), \end{aligned} $$

with equality if and only if $x_1 = \dots = x_8 = 0$ and $x_9 = 1$.
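A quick numerical sanity check of the chain of inequalities above (my own sketch, not part of the original answer; the helper `f` simply re-implements the objective):

```python
def f(x):
    """Objective from the problem: sum over k = 1..9 of (x_k + k/9)^2."""
    return sum((x[k] + (k + 1) / 9) ** 2 for k in range(9))

# The nine vertices e_1, ..., e_9 of the simplex.
vertices = [[1.0 if i == k else 0.0 for i in range(9)] for k in range(9)]
values = [f(v) for v in vertices]

assert all(a < b for a, b in zip(values, values[1:]))  # f(e_1) < ... < f(e_9)
print(values.index(max(values)) + 1)  # prints 9: the maximizing vertex is e_9
```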

Remark 1: the optimization problem is nonconvex. The objective function $f(x_1, \dots, x_9)$ is convex, but you are trying to maximize it. In general, in a convex optimization problem, you are minimizing a convex objective function (or maximizing a concave objective function). Due to the nonconvexity, Lagrange duality might not be the best approach. (It works well for convex problems with strong duality, but not for general nonconvex problems.)
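To make this concrete (a short addendum of mine, using the Lagrangian from the question): the quadratic part of $\mathcal{L}$ in $\bar{x}$ is $-\sum_k x_k^2$, so $\mathcal{L}$ is concave in $\bar{x}$ and unbounded below. The dual function is therefore identically $-\infty$:

$$ g(\bar{\lambda},\mu)=\inf_{\bar{x}\in\mathbb{R}^9}\mathcal{L}(\bar{x},\bar{\lambda},\mu)=-\infty\quad\text{for all }\bar{\lambda},\mu, $$

so the dual problem yields only the trivial bound $-\infty\le p^*$. Setting $\nabla_{\bar{x}}\mathcal{L}=\bar{0}$, as in the question, locates the maximizer of the concave Lagrangian rather than its infimum.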

Remark 2: You are essentially optimizing over the simplex $\Delta = \{\overline{x} = (x_1, \dots, x_9): x_k \geq 0, x_1 + \dotsb + x_9 = 1\}$. The function $f(x_1, \dots, x_9)$ is convex, so its maximum must occur on the boundary (in fact, at one of the nine vertices of the simplex).
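As an illustration of this remark (again my own sketch): drawing random points uniformly from the simplex, no sampled point ever beats the vertex $(0,\dots,0,1)$.

```python
import random

def f(x):
    """Objective from the problem: sum over k = 1..9 of (x_k + k/9)^2."""
    return sum((x[k] + (k + 1) / 9) ** 2 for k in range(9))

best = f([0.0] * 8 + [1.0])  # objective value at the vertex e_9

random.seed(0)
for _ in range(10_000):
    # Uniform sample from the simplex via spacings of 8 sorted uniforms.
    cuts = sorted(random.random() for _ in range(8))
    x = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
    assert f(x) <= best + 1e-12  # no interior/boundary sample beats e_9
```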