Are all optimization problems convex?


Recently, a professor at my university claimed that all optimization problems are convex (after a suitable transformation, if necessary). He is renowned for many important contributions to convex optimization, so I assume that his claim is correct. I would like to understand why it is true and whether one can always explicitly construct such a transformation. A particular example that convinced me the claim is plausible is moment-based optimization (cf. Lasserre's paper, which can be found online). The idea is that finite-dimensional (non-convex) polynomial optimization problems are equivalent to infinite-dimensional (convex) optimization problems over measures.

I would like to know to what extent this idea can be generalized. Is this possible for general (non-convex) finite-dimensional nonlinear programs? If so, will the convex counterpart always be infinite-dimensional? And can non-convex infinite-dimensional optimization problems also be transformed into convex ones?

PS: I am aware that the above-mentioned claim would not mean that non-convex optimization problems suddenly become easier to solve. I am interested in this for purely theoretical reasons.

Edit: As said above, Lasserre's paper uses the fact that non-convex polynomial optimization problems are equivalent to convex optimization problems over measures. The (short) proof of this fact, however, makes no use of the polynomial nature of the objective function. I believe this can be proven for any (measurable/continuous?) objective function. This would mean that any finite-dimensional non-convex optimization problem is equivalent to a convex optimization problem over measures. Have I overlooked something?
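To illustrate this with a toy computation (my own sketch, not from the question; the objective $f$ is arbitrarily chosen, and `scipy` is assumed available): restricting to measures supported on a finite grid turns "minimize $\int f \, d\mu$ over probability measures $\mu$" into a linear, hence convex, program, and its optimal value equals the minimum of $f$ over the grid, attained by a Dirac measure at a minimizer.

```python
import numpy as np
from scipy.optimize import linprog

# Arbitrary non-convex objective on the compact set Omega = [-2, 2].
f = lambda x: x**4 - 3 * x**2 + x

# Discretize Omega: measures supported on this grid are just
# probability vectors mu >= 0 with sum(mu) == 1.
grid = np.linspace(-2.0, 2.0, 2001)
fvals = f(grid)

# min_mu  sum_i f(x_i) mu_i   s.t.  sum_i mu_i = 1,  mu >= 0.
# This is a *linear* (hence convex) program in mu.
res = linprog(c=fvals, A_eq=np.ones((1, grid.size)), b_eq=[1.0],
              bounds=[(0.0, None)] * grid.size)

# The LP optimum equals the minimum of f over the grid: an optimal
# measure concentrates its mass on a minimizer (a Dirac measure).
print(abs(res.fun - fvals.min()) < 1e-6)
```

Of course, for a continuum $\Omega$ the measure-valued problem is infinite-dimensional; the grid is only a finite-dimensional stand-in.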


Suppose that your optimization problem is

$(P1) \quad \min_{x} f(x)$ such that $x \in \Omega$,

where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a continuous function and $\Omega \subset \mathbb{R}^n$ is a nonempty compact set. The assumptions of continuity of $f$ and compactness of $\Omega$ guarantee that a minimum exists (by Weierstrass' extreme value theorem).

This generic optimization problem attains the same optimal value as the convex optimization problem

$(P2) \quad \min_{x,\alpha} \alpha$ such that $(x,\alpha) \in \textrm{conv}(\{(x,\alpha) : x\in \Omega,\ f(x) \le \alpha\})$,

where $\textrm{conv}(S)$ denotes the convex hull of the set $S$.
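The equivalence can be checked numerically. Here is a small sketch (my own illustration, with an arbitrarily chosen non-convex $f$; `scipy` is assumed available): since the objective $\alpha$ of (P2) is linear, its minimum over the convex hull of a sampled version of the constraint set is attained at an extreme point, and it matches the minimum of $f$ over the samples.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Arbitrary non-convex objective on the compact set Omega = [-2, 2].
f = lambda x: np.cos(3 * x) + 0.5 * x**2

xs = np.linspace(-2.0, 2.0, 200)

# Sample S = {(x, alpha) : x in Omega, f(x) <= alpha}, capping alpha
# from above so that the sampled set is bounded.
alphas = np.linspace(f(xs).min(), f(xs).max() + 1.0, 40)
pts = np.array([(x, a) for x in xs for a in alphas if f(x) <= a])

hull = ConvexHull(pts)

# The linear objective alpha attains its minimum over the convex hull
# at a vertex, and that minimum equals min_x f(x) -- the optimal value
# of the original non-convex problem (P1).
min_alpha_over_hull = pts[hull.vertices][:, 1].min()
print(np.isclose(min_alpha_over_hull, f(xs).min()))
```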

Remark 1: As pointed out in the comments, although (P2) has the same optimal value as (P1), the optimal points may differ.
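Why the optimal values coincide can be sketched in one line (my own summary, not part of the original answer): a linear functional attains the same infimum over a set as over its convex hull, and for fixed $x \in \Omega$ the smallest feasible $\alpha$ is $f(x)$:

```latex
% With S = \{(x,\alpha) : x \in \Omega,\ f(x) \le \alpha\},
% minimizing the linear functional (x,\alpha) \mapsto \alpha gives
\min_{(x,\alpha)\in \mathrm{conv}(S)} \alpha
  \;=\; \min_{(x,\alpha)\in S} \alpha
  \;=\; \min_{x\in\Omega}\, \min_{\alpha \ge f(x)} \alpha
  \;=\; \min_{x\in\Omega} f(x).
```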

Remark 2: Some authors define a convex optimization problem differently, namely as

$\min_x f(x)$ such that $g_i(x) \le 0,\ i=1,\ldots,m$ and $h_j(x) = 0,\ j=1,\ldots,k$, where $f$ and the $g_i$ are convex functions and the $h_j$ are affine functions.

For more details, see Amir Ali Ahmadi's lecture notes on convex and conic optimization, in particular, pages 13 and 14 of lecture 4 from 2016.