Does this special case of convex quadratic programming have a partially-unique solution?


I know that a strictly convex quadratic programming problem has a unique solution, but I'm curious about the following situation:

If $Q$ is positive definite, does the following problem: $$\min\limits_{x\in\mathbb{R}^n,y\in\mathbb{R}^m }\{x^TQx+{c^1}^Tx+{c^2}^Ty\ \vert\ A^1x+A^2y\le a,\ B^1x+B^2y= b\}$$ have a unique solution $x$? (assuming at least one solution exists)

This problem is convex, so the solution $(x,y)$, if it exists, need not be unique; but since $Q$ is positive definite, it seems that $x$ alone should be unique.
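This behavior can be seen numerically on a tiny made-up instance (with $n=m=1$, $Q=1$, $c^1=c^2=0$, one inequality constraint, no equality constraints); the instance and tolerances below are my own illustration, not part of the question:

```python
# Toy instance:  min x^2  s.t.  x + y <= 1.
# Every (0, y) with y <= 1 is optimal, so (x, y) is not unique, but x = 0 is.
from scipy.optimize import minimize

objective = lambda z: z[0] ** 2                               # x^T Q x with Q = 1
ineq = {"type": "ineq", "fun": lambda z: 1.0 - z[0] - z[1]}   # x + y <= 1

# Solve from two different starting points.
res1 = minimize(objective, x0=[2.0, -3.0], constraints=[ineq], method="SLSQP")
res2 = minimize(objective, x0=[0.5, 0.3], constraints=[ineq], method="SLSQP")

# Both runs recover x = 0, while the y components differ with the start.
print(res1.x, res2.x)
```

Both solves agree on the $x$ component even though the $y$ components they return are different, which is exactly the "partial uniqueness" being asked about.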

2 Answers


Yes, it is.

The problem can be solved, theoretically, by first partially minimizing over $y$, obtaining a convex function of $x$ only, and then minimizing over $x$.

After partially minimizing over $y$, you obtain a strictly convex function of $x$ on a convex domain, and such a function has at most one minimizer (assuming, of course, that the problem has a solution at all).

Edit To clarify further, the optimal $x$ can be obtained by solving $$ \begin{gathered} \min_{x} \left\{ \min_{y} \left\{ x^T Q x + {c^1}^T x + {c^2}^T y : A^1 x + A^2 y \leq a, B^1 x + B^2 y = b \right\} \right\} \\ =\min_{x} \Biggl\{ x^T Q x + {c^1}^T x + \underbrace{ \min_y \{ {c^2}^T y : A^1 x + A^2 y \leq a, B^1 x + B^2 y = b \} }_{\phi(x)} \Biggr\} \end{gathered} $$

The function $\phi(x)$ is a convex extended-valued function of $x$. Therefore, the problem of minimizing over $x$ is a minimization of a strictly convex function over a convex set, and thus if the minimum exists, the minimizer is unique.
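The inner problem defining $\phi(x)$ is a linear program in $y$ for each fixed $x$, so it can be evaluated with an LP solver. A minimal sketch on made-up data (constraints $y\ge x$ and $y\ge -x$, so that $\phi(x)=|x|$; all names and data here are my own illustration):

```python
# Evaluate phi(x) = min_y { c2^T y : A1 x + A2 y <= a } by solving an LP in y,
# then minimize the reduced objective x^2 + phi(x) over a grid of x values.
import numpy as np
from scipy.optimize import linprog

def phi(x):
    # LP in y alone:  min y  s.t.  -y <= -x  and  -y <= x   (i.e. y >= |x|)
    res = linprog(c=[1.0], A_ub=[[-1.0], [-1.0]], b_ub=[-x, x],
                  bounds=[(None, None)], method="highs")
    return res.fun

xs = np.linspace(-2.0, 2.0, 81)
reduced = np.array([x * x + phi(x) for x in xs])  # x^T Q x + phi(x), with Q = 1
x_star = xs[np.argmin(reduced)]
print(x_star)
```

The reduced objective $x^2+|x|$ is strictly convex, and the grid search recovers its unique minimizer $x^*=0$, illustrating the partial-minimization argument.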


The problem defined above is a particular case of the more general convex optimization problem

$$\min_{(x,y)\in \Omega}h(x,y):=f(x)+g(y)\tag{1}$$ where $f$ is a strictly convex function of $x$, $g$ is a convex function of $y$, and $\Omega$ is a convex set.

Now suppose that $(x^1,y^1)$ and $(x^2,y^2)$ are optimal solutions for $(1)$ with $x^1\ne x^2$ and $h(x^1,y^1)=h(x^2,y^2)=H$.

Let $\lambda\in(0,1)$, then: $$h(\lambda(x^1,y^1)+(1-\lambda)(x^2,y^2))=h(\lambda x^1+(1-\lambda) x^2,\lambda y^1+(1-\lambda) y^2)\\=f(\lambda x^1+(1-\lambda) x^2)+g(\lambda y^1+(1-\lambda) y^2)\\<\lambda f( x^1)+(1-\lambda)f( x^2)+\lambda g( y^1)+(1-\lambda) g(y^2)\\ =\lambda h(x^1,y^1)+(1-\lambda)h(x^2,y^2)=H$$ where the strict inequality holds because $f$ is strictly convex and $x^1\ne x^2$ (the $g$ terms only need $\le$, which convexity gives).

Then, for any $\lambda \in(0,1)$, the point $\lambda(x^1,y^1)+(1-\lambda)(x^2,y^2)$, which lies in $\Omega$ by convexity, attains a value smaller than $H$, which is impossible since $H$ is the minimum.

Thus, the assumption $x^1\ne x^2$ is false, so $x^1=x^2$, which proves that every optimal solution $(x^*,y^*)$ of $(1)$ has the same value of $x^*$.
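As a quick numerical sanity check of the inequality chain, take the stand-in functions $f(x)=x^2$ (strictly convex) and $g(y)=|y|$ (convex but not strictly); the specific points and values of $\lambda$ below are my own choices for illustration:

```python
# Check h(lam*z1 + (1-lam)*z2) < lam*h(z1) + (1-lam)*h(z2) for several lam,
# with h(x, y) = f(x) + g(y), f strictly convex and x1 != x2.
f = lambda x: x * x
g = lambda y: abs(y)
h = lambda x, y: f(x) + g(y)

x1, y1 = 0.0, 2.0    # two points with x1 != x2 and equal h-value H = 2
x2, y2 = 1.0, -1.0

strict = []
for lam in [0.1, 0.3, 0.5, 0.7, 0.9]:
    xm = lam * x1 + (1 - lam) * x2
    ym = lam * y1 + (1 - lam) * y2
    # strict inequality, driven by the strictly convex f term alone
    strict.append(h(xm, ym) < lam * h(x1, y1) + (1 - lam) * h(x2, y2))
print(strict)
```

Every tested $\lambda$ gives a strictly smaller value at the convex combination, matching the contradiction used in the proof.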