Equivalence and convexity in constrained optimization problems


Let $\mathcal{X} = \{x_1,\dots,x_N\}$ be a finite set in $\mathbb{R}^d$ and let $S$ and $D$ be two subsets of $\mathcal{X}\times \mathcal{X}$, with $S \cap D = \emptyset$. If $M$ is a positive semidefinite matrix of dimension $d$, we write $M \succeq 0$, and we define the norm associated with $M$ by $\|x\|_M := \sqrt{x^TMx}$ (assuming that vectors are represented as column matrices).
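As a quick numerical sanity check of this definition, the $M$-norm is easy to compute with numpy; this is a minimal sketch (the function name `m_norm` and the sample vectors are my own, not from the question):

```python
import numpy as np

def m_norm(x, M):
    """||x||_M = sqrt(x^T M x) for a positive semidefinite M."""
    return float(np.sqrt(x @ M @ x))

# With M = I, the M-norm reduces to the Euclidean norm:
x = np.array([3.0, 4.0])
print(m_norm(x, np.eye(2)))  # 5.0

# A generic PSD matrix can always be built as A @ A.T:
A = np.array([[2.0, 0.0], [1.0, 1.0]])
M = A @ A.T
print(m_norm(x, M))
```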

I'm studying the following optimization problem:

\begin{align} \min_M &\quad \sum_{(x_i,x_j) \in S} \|x_i-x_j\|_M^2 \\ \text{s.t.: } &\quad \sum_{(x_i,x_j)\in D}\|x_i-x_j\|_M \ge 1 \\ &\quad M \succeq 0. \end{align}

This is my first time working with this kind of problem, and I don't know many of the tools needed to approach it formally, so I have many doubts. If possible, I would like to learn the techniques used to answer the questions below, so that I can apply them to similar problems in the future.

1- My first question is about convexity. I have read that this problem is convex, but I'm not used to the concept of a convex problem. I suppose this means that the objective function and all the constraints are convex. If so, what are the techniques to prove the convexity of each of them? At the moment, the only tool I have is the definition. I would also like to know if there is any way to detect (intuitively) whether a problem is convex.

Finally, also about convexity: in which cases does a convex problem guarantee the existence and uniqueness of a minimum? Not every convex function attains a minimum, so I would like to be able to show this rigorously too.

2- My second question is about the equivalence of this kind of problem. I have read that the previous problem is equivalent to the following one:

\begin{align} \max_M &\quad \sum_{(x_i,x_j) \in D} \|x_i-x_j\|_M \\ \text{s.t.: } &\quad \sum_{(x_i,x_j)\in S}\|x_i-x_j\|_M^2 \le 1 \\ &\quad M \succeq 0. \end{align}

First of all, in what sense are the two problems equivalent? Where I read this, there was no description of the concept of 'equivalence'. Does it mean that both problems attain their minimum/maximum at the same point, or at proportional points, or just that there is some correspondence between the solutions of the two problems? And once the definition is clear, what are the ways to prove that equivalence?

Thanks in advance.

  1. A convex optimization problem requires the functions themselves to be convex (not just the sets defined by those functions). Here the relevant functions are $$f_0(M) = \sum_{(x_i,x_j) \in S} (x_i-x_j)^T M (x_i-x_j) \text{, and } f_1(M) = 1 - \sum_{(x_i,x_j) \in D} \sqrt{ (x_i-x_j)^T M (x_i-x_j)}.$$ The first function is linear in $M$ and hence convex. For the second function, note that $M \mapsto (x_i-x_j)^T M (x_i-x_j)$ is linear, the negative square root is convex, convex functions composed with linear functions are convex, and sums of convex functions are convex. A solution is guaranteed to be unique if the objective is strictly convex (yours is not). However, all optimal solutions have the same value (there are no non-global local optima). Your feasible set is closed, so if the infimum is attained, it is attained on that set. Your problem is feasible (as long as there is an $(x_i,x_j) \in D$ such that $x_i-x_j \neq 0$) and the objective is bounded below by $0$, so an optimal solution exists.
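The convexity claims above can be spot-checked numerically: sample segments in the PSD cone and test the defining inequality $f(tM_1+(1-t)M_2) \le t\,f(M_1)+(1-t)\,f(M_2)$. The pair differences below are random stand-ins, not data from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy difference vectors x_i - x_j for pairs in S and D (illustrative only).
diffs_S = [rng.standard_normal(3) for _ in range(4)]
diffs_D = [rng.standard_normal(3) for _ in range(4)]

def f0(M):
    # Objective: sum of squared M-norms over S (linear in M).
    return sum(v @ M @ v for v in diffs_S)

def f1(M):
    # Constraint function: 1 - sum of M-norms over D (convex in M).
    return 1.0 - sum(np.sqrt(v @ M @ v) for v in diffs_D)

def random_psd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T  # A A^T is always positive semidefinite

# Test the convexity inequality along random segments in the PSD cone.
for _ in range(100):
    M1, M2 = random_psd(3), random_psd(3)
    t = rng.uniform()
    Mt = t * M1 + (1 - t) * M2
    for f in (f0, f1):
        assert f(Mt) <= t * f(M1) + (1 - t) * f(M2) + 1e-9
print("convexity inequality holds on all sampled segments")
```

This is evidence, not a proof; the proof is the composition argument above.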

  2. You could look at your problem as finding a trade-off between $\sum_{(x_i,x_j) \in S} \|x_i-x_j\|_M^2$ and $\sum_{(x_i,x_j)\in D}\|x_i-x_j\|_M$. Both problems find a point on the Pareto curve.
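The "same trade-off, different scaling" picture rests on the homogeneity of the two sums: writing $f_S(M)$ for the sum of squared norms and $f_D(M)$ for the sum of norms, $f_S(tM) = t\,f_S(M)$ while $f_D(tM) = \sqrt{t}\,f_D(M)$, so any PSD matrix can be rescaled to make either constraint active without changing its "direction" in the PSD cone. A small numerical sketch with made-up data (the names `f_S`, `f_D` are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy difference vectors for S and D (illustrative only).
diffs_S = [rng.standard_normal(3) for _ in range(4)]
diffs_D = [rng.standard_normal(3) for _ in range(4)]

def f_S(M):  # sum of squared M-norms over S
    return sum(v @ M @ v for v in diffs_S)

def f_D(M):  # sum of M-norms over D
    return sum(np.sqrt(v @ M @ v) for v in diffs_D)

# Start from an arbitrary PSD matrix and rescale it two ways.
A = rng.standard_normal((3, 3))
M = A @ A.T

M1 = M / f_D(M) ** 2  # makes the constraint of problem 1 active: f_D(M1) = 1
M2 = M / f_S(M)       # makes the constraint of problem 2 active: f_S(M2) = 1

print(f_D(M1))  # ~1.0
print(f_S(M2))  # ~1.0

# Homogeneity ties the two scalings together for ANY choice of M:
# f_S(M1) * f_D(M2)^2 = 1, so the two problems trace the same ray.
print(f_S(M1) * f_D(M2) ** 2)  # ~1.0
```

So a solution of one problem gives a solution of the other by a positive rescaling: this is the sense in which the solutions are at "proportional points".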