Solving a constrained optimization problem


I have the following problem to optimize:

\begin{align} \begin{split} \hat{\mathbf{x}}, \hat{\delta} = \underset{\mathbf{x},\,\delta}{\operatorname{argmin}} \, \sum_{i=1}^l \sum_{j=1}^m a_{i,j}\left(x_i + \delta \right)^2 \\ \text{s.t.} \quad 1 - x_i \leq 0 \quad \forall\, i \\ \text{and} \quad \delta \leq 0 \end{split} \end{align} where $a_{i,j}$ is a known coefficient that does not depend on $x_i$.

I am thinking of solving this problem by minimizing the following Lagrangian function: \begin{align} \underset{\mathbf{x},\,\delta}{\operatorname{argmin}} \, L\left(\mathbf{x}, \delta, \lambda_1, \lambda_2\right) = \sum_{j=1}^m \sum_{i=1}^l a_{i,j}\left(x_i + \delta \right)^2 + \lambda_1 \left(1 - x_i\right) + \lambda_2\,\delta \end{align} and then setting the derivatives to zero.

I am sure I am wrong. Can someone provide me with brief mathematical details about how this problem can be solved?

Any help would be much appreciated.

Best answer

I assume that the $a_{ij}$ are nonnegative. Define $q_i = \sum_j a_{ij}$, $p_i = q_i / \sum_i q_i$, and $c = -\delta$. The problem then becomes:

$$\min_{x,c} \left\{ \sum_i p_i(x_i - c)^2 : x_i \geq 1, \; c \geq 0 \right\}.$$ You can recognize the weighted-variance formula in the objective. Let me write the problem as a nested minimization: $$\min_x \min_c \left\{ \sum_i p_i(x_i - c)^2 : x_i \geq 1, \; c \geq 0 \right\}.$$ For fixed $x$, the inner problem is $\min_c \left\{ \sum_i p_i(x_i - c)^2 : c \geq 0 \right\}$: the minimum of a convex quadratic in $c$ subject to a nonnegativity constraint. The unconstrained minimizer of the quadratic is $c = \sum_i p_i x_i$, the weighted mean. This value automatically satisfies the constraint $c \geq 0$ (because each $x_i \geq 1 > 0$ and the weights $p_i$ are nonnegative), so it is also optimal for the constrained inner problem.
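As a quick numerical sanity check of the inner minimization (a sketch with made-up nonnegative $a_{ij}$, not part of the derivation): the closed-form minimizer $c = \sum_i p_i x_i$ should match a brute-force scan over $c \geq 0$.

```python
import numpy as np

# Hypothetical data: nonnegative a_{ij}, as assumed in the answer (l=5, m=3).
rng = np.random.default_rng(0)
a = rng.uniform(0.0, 2.0, size=(5, 3))

q = a.sum(axis=1)   # q_i = sum_j a_{ij}
p = q / q.sum()     # weights p_i, summing to 1

# Any feasible x (each x_i >= 1).
x = np.array([1.0, 2.0, 1.5, 3.0, 1.2])

# Closed-form inner minimizer: the weighted mean of the x_i.
c_star = p @ x

# Brute-force scan of the inner objective over a fine grid of c >= 0.
cs = np.linspace(0.0, 5.0, 100001)
vals = (p[:, None] * (x[:, None] - cs[None, :]) ** 2).sum(axis=0)
c_grid = cs[np.argmin(vals)]

print(c_star, c_grid)  # the two agree to grid resolution
```

The grid search only confirms the calculus; note that $c^\* = \sum_i p_i x_i \geq 1$ here, so the constraint $c \geq 0$ is indeed inactive.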

Now you can write the original problem as: $$\min_x \left\{ \sum_i p_i\Big(x_i - \sum_j p_j x_j\Big)^2 : x_i \geq 1\right\},$$ or as $$\min_x \left\{ x^T(\textrm{Diag}(p) - pp^T)x : x_i \geq 1\right\},$$ where $\textrm{Diag}(p)$ is the matrix with $p$ on the diagonal and zeros elsewhere. If all $x_i$ are equal, the objective value is $0$, and since the objective is a sum of squares you cannot do better, so $0$ is the optimal value. For example, $x_i = 1$ for all $i$ gives $c = 1$, i.e. $\delta = -1$ in the original variables.
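A short check (again with hypothetical weights) that the quadratic-form rewrite matches the weighted-variance expression, and that a constant feasible $x$ attains the optimal value $0$:

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.uniform(0.5, 2.0, size=6)
p = q / q.sum()                     # weights p_i, summing to 1

M = np.diag(p) - np.outer(p, p)     # Diag(p) - p p^T

x = rng.uniform(1.0, 4.0, size=6)   # feasible x (each x_i >= 1)
var_form = (p * (x - p @ x) ** 2).sum()   # weighted variance of x
quad_form = x @ M @ x                     # quadratic-form version
print(var_form, quad_form)          # equal up to rounding

ones = np.ones(6)                   # all x_i equal (and feasible)
print(ones @ M @ ones)              # the optimal value, 0
```

The identity used here is $x^T(\textrm{Diag}(p) - pp^T)x = \sum_i p_i x_i^2 - (\sum_i p_i x_i)^2$, which equals the weighted variance because the $p_i$ sum to $1$.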