Unique solution in a quasi-convex optimization problem?


Let $f:\mathbb{R}^n\to \mathbb{R}$ and $g:\mathbb{R}^n\to \mathbb{R}$ be two differentiable functions, and consider the optimization problem $$ \min_{x\in \mathbb{R}^n} f(x) $$ subject to $$ g(x)\leq 0. $$ I am considering two cases here:

  1. $f$ and $g$ are both strictly quasi-convex.
  2. $f$ is linear (but not constant) and $g$ is strictly quasi-convex.

My questions are:

  1. Can we show that under both scenarios there is a unique global minimizer $x$ that solves the optimization problem?
  2. Are the Karush-Kuhn-Tucker conditions sufficient to characterize that global minimizer?

Also, is there a good reference for (strictly) quasi-convex optimization? Many thanks for any help!


First, the problem may not have a global minimum at all. Consider the simple one-dimensional case $\min x^3 \; \text{s.t.}\; x^3\le 0$: both functions are strictly quasi-convex (strictly increasing), the feasible set is $(-\infty,0]$, and the objective is unbounded below on it.
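A quick numeric sketch of this counterexample (the function names are just for illustration):

```python
# Counterexample sketch: minimize f(x) = x^3 subject to g(x) = x^3 <= 0.
# The feasible set is (-inf, 0], and f decreases without bound on it,
# so no global minimum exists.
def f(x):
    return x ** 3

feasible = [-1.0, -10.0, -100.0, -1000.0]
values = [f(x) for x in feasible]

assert all(f(x) <= 0 for x in feasible)        # every point satisfies g(x) = x^3 <= 0
assert values == sorted(values, reverse=True)  # f strictly decreases along the feasible set
```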

Second, suppose the problem has a global minimum. Then it is easy to prove that a strictly quasi-convex function $f$ has a unique minimum over a nonempty convex feasible region $C$ (here $C=\{x: g(x)\le 0\}$ is convex because it is a sublevel set of the quasi-convex $g$): suppose $x\ne y$ in $C$ are both global minima. Then $z=\lambda x + (1-\lambda)y\in C$ for any $\lambda\in(0,1)$, and strict quasi-convexity gives $f(z)<\max\{f(x),f(y)\}=f(x)$, contradicting the minimality of $x$. Note that differentiability is not needed, and any local minimum is a global minimum.
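The inequality doing the work here, $f(z)<\max\{f(x),f(y)\}$ for $x\ne y$, can be spot-checked numerically. The sketch below uses $f(x)=\sqrt{|x|}$, a hypothetical strictly quasi-convex (but non-convex) function chosen purely for illustration:

```python
import math
import random

# f(x) = sqrt(|x|): strictly decreasing on (-inf, 0], strictly increasing
# on [0, inf), hence strictly quasi-convex on R (illustrative choice).
def f(x):
    return math.sqrt(abs(x))

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    if x == y:
        continue
    lam = random.uniform(0.01, 0.99)
    z = lam * x + (1 - lam) * y
    # The defining inequality of strict quasi-convexity, which rules out
    # two distinct global minima in the uniqueness proof above:
    assert f(z) < max(f(x), f(y))
```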

Third, consider a non-constant linear objective over the set $C$ defined by a strictly quasi-convex constraint function. If the minimum exists, it must be unique. The argument is similar to the one above: if $x\ne y$ were both minima, then $z=\lambda x + (1-\lambda)y$ satisfies $g(z)<\max\{g(x),g(y)\}\le 0$, so $z$ lies in the interior of $C$; a linear objective is constant on the segment, so $z$ is also a minimum. But a non-constant linear function $f(x)=c^\top x$ cannot attain a minimum at an interior point, since moving slightly in the direction $-c$ stays feasible and strictly decreases the objective.
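A concrete sketch with hypothetical choices $f(x)=x$ (linear) and $g(x)=x^2-1$ (strictly convex, hence strictly quasi-convex), so the feasible set is $C=[-1,1]$:

```python
# g(x) = x^2 - 1 defines the feasible set C = [-1, 1].
def g(x):
    return x * x - 1

# A strict convex combination of two distinct feasible points is strictly
# feasible, i.e. lands in the interior of C:
x, y = -1.0, 1.0              # both on the boundary of C
z = 0.5 * x + 0.5 * y
assert g(x) <= 0 and g(y) <= 0
assert g(z) < 0               # strict inequality: z is interior

# The linear objective f(x) = x attains its unique minimum at x = -1:
feasible = [i / 1000 for i in range(-1000, 1001)]
minimizers = [t for t in feasible if t == min(feasible)]
assert minimizers == [-1.0]   # exactly one minimizer on this grid
```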

Fourth, a strictly quasi-convex function may still have stationary points that are not local or global minima, so the KKT conditions cannot be sufficient to characterize the global minimizer, if one exists. For example, $x=0$ is a stationary point of the strictly quasi-convex function $x^3$ but not a minimum.
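A hypothetical counterexample making this concrete: minimize $f(x)=x^3$ subject to $g(x)=x^2-1\le 0$. The point $x=0$ satisfies the KKT conditions with multiplier $\mu=0$, yet the true minimizer is $x=-1$:

```python
# KKT failure sketch: f(x) = x^3 (strictly quasi-convex), g(x) = x^2 - 1.
def f(x):  return x ** 3
def df(x): return 3 * x ** 2   # f'(x)
def g(x):  return x * x - 1
def dg(x): return 2 * x        # g'(x)

x_kkt, mu = 0.0, 0.0
# Stationarity: f'(x) + mu * g'(x) = 0
assert df(x_kkt) + mu * dg(x_kkt) == 0
# Primal feasibility, dual feasibility, complementary slackness:
assert g(x_kkt) <= 0 and mu >= 0 and mu * g(x_kkt) == 0
# Yet x = 0 is not the global minimizer: f(-1) < f(0).
assert f(-1.0) < f(x_kkt)
```

So a KKT point need not be a minimizer here, even though both functions satisfy the assumptions of the question.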