The definition of a convex optimization problem


I am trying to understand whether the following problem is a convex optimization problem:

\begin{align} \min \quad & 1 \\ \text{s.t.} \quad & -x \le 0 \\ & -y \le 0 \\ & -xy \le 0. \end{align}

The objective is constant, hence it is a convex function. $-x$ and $-y$ are linear hence they are convex functions. However, $-xy$ is neither convex nor concave.

According to Boyd's book on convex optimization, the definition of a convex optimization problem (Equation (1.8) in the book) requires that the objective and the functions on the left-hand side of each inequality constraint all be convex. So it appears that the above is not a convex optimization problem.

However, it seems to me that the feasible domain of the above problem is \begin{align} x & \ge 0 \\ y & \ge 0, \end{align} since $x \ge 0$ and $y \ge 0$ already imply $xy \ge 0$, i.e. $-xy \le 0$. This is a convex domain, so intuitively the above should be a convex problem, as it minimizes a convex objective over a convex domain.
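As a quick numerical sanity check (a sketch of mine, not part of the original question; the function names are made up), one can sample points and confirm that the three constraints cut out exactly the nonnegative quadrant:

```python
import random

def feasible(x, y):
    """The three constraints from the question: -x <= 0, -y <= 0, -xy <= 0."""
    return -x <= 0 and -y <= 0 and -x * y <= 0

def in_quadrant(x, y):
    """Candidate description of the feasible set: the nonnegative quadrant."""
    return x >= 0 and y >= 0

random.seed(0)
for _ in range(100_000):
    x = random.uniform(-2, 2)
    y = random.uniform(-2, 2)
    # whenever x >= 0 and y >= 0, the product xy is >= 0, so -xy <= 0
    # is automatic; the two descriptions should never disagree
    assert feasible(x, y) == in_quadrant(x, y)
print("feasible set matches the nonnegative quadrant on all samples")
```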



This is a matter of definitions. Let us consider the problem \begin{align*} \text{Minimize} \quad & f(x) \\ \text{such that} \quad & g(x) \le 0. \end{align*} Here, $f : \mathbb{R}^n \to \mathbb{R}$, $g : \mathbb{R}^n \to \mathbb{R}^m$, and the inequality constraint $g(x) \le 0$ is understood component-wise.

The feasible set is \begin{equation*} \Omega = \{ x \in \mathbb{R}^n \mid g(x) \le 0\}. \end{equation*}

Now, there are two notions of convexity for the above problem:

  • $f$ and all components of $g$ are convex.
  • $f$ and $\Omega$ are convex.

It is easy to see that the first notion implies the second one, but not vice versa (your example shows this).
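To make the gap concrete (an illustrative sketch; the name `g3` is mine), the midpoint test below shows that the constraint component $-xy$ from the question fails convexity, even though the feasible set it helps define is convex:

```python
def g3(x, y):
    """Third constraint function from the question: g3(x, y) = -x*y."""
    return -x * y

# Convexity requires g3(midpoint) <= average of the endpoint values
# for every pair of points. The pair (1, 1) and (-1, -1) violates it:
a, b = (1.0, 1.0), (-1.0, -1.0)
mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # midpoint (0, 0)
value_at_mid = g3(*mid)                 # 0.0
average_value = (g3(*a) + g3(*b)) / 2   # (-1 + -1) / 2 = -1.0
assert value_at_mid > average_value     # convexity fails at this pair
print("g3 is not convex:", value_at_mid, ">", average_value)
```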

However, it is more reasonable to use the first definition, since we are working explicitly with the functions $g$.


I must gently disagree with @Michael's first comment. If we relaxed the definition as the OP suggests we might, there would certainly be a significant shift in what we could say about convex optimization problems: we would suddenly no longer be able to claim that convex optimization problems, as a class, are tractable.

The most important reason we study convex models is that, unlike the larger class of mathematical models, they admit a variety of tractable numerical methods for solving them. And if I'm going to get beyond a theoretical paper to the actual solution of a given class of problems, I'm going to need to describe them in a computer-friendly manner. For instance, I might write code to compute the values and derivatives of the objective and constraint functions. I might express the problem in conic form and use a barrier or scaling point method. Or I could rely on a cutting-plane oracle or barrier function to describe the constraint set.

But if I were to start allowing any sort of constraint combination you wish, then the likelihood that I will find such a computational description drops considerably. Consider the constraint $x^2 \geq 1$: it admits no cutting-plane oracle, and no convex barrier function exists for it; heck, the set it describes isn't even connected. But wait, if I also happen to know that $x \geq 0$, then I still have a convex problem, right? Well, sure, but of course the first thing I will do is transform my problem into a solvable form by replacing $x^2 \geq 1$ with $x \geq 1$. I'm back to my comfortable Boyd-convex domain.
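The reformulation step can be checked directly (a toy sketch of mine; the function names are made up): for real $x$, the pair of constraints $x^2 \ge 1$ and $x \ge 0$ describes exactly the same set as the single convex constraint $x \ge 1$:

```python
def original_feasible(x):
    """Constraints as first stated: x**2 >= 1 together with x >= 0."""
    return x * x >= 1 and x >= 0

def reformulated_feasible(x):
    """Equivalent convex description: x >= 1."""
    return x >= 1

# The two descriptions agree at every sampled point on [-4, 4].
for i in range(-400, 401):
    x = i / 100
    assert original_feasible(x) == reformulated_feasible(x)
print("x**2 >= 1 with x >= 0 is exactly x >= 1")
```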

Sure, there are going to be some examples where a "non-convex" description of the problem still admits a tractable solution method. But hopefully we would agree that exceptional cases do not provide a useful foundation on which to build an entire practice.