Standard Form Linear Programming Problem (Prove or disprove)


$\mathbf{Q}$: We consider a standard form linear programming problem, where $\mathbf{A} \in \mathbb{R}^{m \times n}$, $\mathbf{c} \in \mathbb{R}^{n}$, $\mathbf{b} \in \mathbb{R}^{m}$, and the decision variable $\mathbf{x}$ is in $\mathbb{R}^n$. Prove or disprove:

If there are several optimal solutions, then the set of optimal solutions lies in the convex hull of all the basic feasible solutions.

$\mathbf{My\ answer}$: True.

I chose this answer because it seemed the most intuitive, and I thought of proving it by contradiction, but unfortunately I am unsure how to go about the proof. Some help would be deeply appreciated.

$\mathbf{Best\ answer}$:

You haven't told us what your standard form is. I'll assume that it's

$\min c^{T}x$

$Ax=b$

$x \geq 0$

This is false, for the simple reason that the set of optimal solutions can be unbounded, while the convex hull of a finite set of points is always bounded, so the optimal set cannot lie inside it.

For example, try

$\min x_{1}-x_{2}$

$x_{1}-x_{2}=0$

$x \geq 0$

Here, the only BFS is $x=0$, but there are many other optimal solutions such as $x_{1}=1$, $x_{2}=1$.
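A quick numerical sketch of this counterexample (my own illustration, not part of the original answer) checks that every point on the ray $x_1 = x_2 = t$, $t \geq 0$ is feasible and attains the same objective value as the unique BFS at the origin:

```python
# Counterexample: min x1 - x2  s.t.  x1 - x2 = 0,  x >= 0.
# Every point (t, t) with t >= 0 is feasible with objective value 0,
# so the optimal set is the unbounded ray {(t, t) : t >= 0}.

def objective(x):
    return x[0] - x[1]

def is_feasible(x, tol=1e-9):
    # Equality constraint x1 - x2 = 0 and nonnegativity, up to tolerance.
    return abs(x[0] - x[1]) <= tol and all(xi >= -tol for xi in x)

# The only basic feasible solution is the origin.
bfs = (0.0, 0.0)
assert is_feasible(bfs) and objective(bfs) == 0.0

# Points arbitrarily far from the convex hull of {bfs} are still optimal.
for t in (1.0, 10.0, 1e6):
    x = (t, t)
    assert is_feasible(x)
    assert objective(x) == 0.0  # same optimal value as the BFS
```

Since the convex hull of the single BFS is just the origin, the optimal ray clearly does not lie inside it.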

$\mathbf{Another\ answer}$:

For the linear programming problem

$$\begin{aligned}&\text{maximize} && \mathbf{c}^{\mathrm{T}}\mathbf{x} \\ &\text{subject to} && A\mathbf{x} \leq \mathbf{b} \\ &\text{and} && \mathbf{x} \geq \mathbf{0}\end{aligned}$$ if we have two feasible solutions $\mathbf{x}_1$ and $\mathbf{x}_2$ that attain the maximum of $\mathbf{c}^{\mathrm{T}}\mathbf{x}$, then for any $\lambda \in [0,1]$ the convex combination $\lambda \mathbf{x}_1 + (1 - \lambda)\mathbf{x}_2$ is also feasible (because the constraints are linear). Moreover, since $\mathbf{c}^{\mathrm{T}}\mathbf{x}_1 = \mathbf{c}^{\mathrm{T}}\mathbf{x}_2$, we have

$$\mathbf {c} ^{\mathrm {T} }\left(\lambda \mathbf{x}_1 + (1 - \lambda)\mathbf{x}_2\right) = \mathbf {c} ^{\mathrm {T} }\mathbf {x}_1 = \mathbf {c} ^{\mathrm {T} }\mathbf {x}_2.$$ So the vector $\lambda \mathbf{x}_1 + (1 - \lambda)\mathbf{x}_2$ solves the problem as well.

This argument extends straightforwardly to convex combinations of any finite number of optimal solutions $\mathbf{x}_1, \dots, \mathbf{x}_k$: the set of optimal solutions is convex.
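The convexity argument above can be checked numerically on a small made-up instance (the instance and helper functions are my own, chosen for illustration): maximize $x_1 + x_2$ subject to $x_1 + x_2 \leq 1$, $x \geq 0$, where $(1,0)$ and $(0,1)$ are both optimal.

```python
# Check: a convex combination of two optimal solutions of
#   maximize c^T x  s.t.  A x <= b,  x >= 0
# is again feasible and attains the same objective value.
# Hypothetical instance: c = (1, 1), A = [[1, 1]], b = (1,).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def feasible(A, b, x, tol=1e-9):
    # A x <= b and x >= 0, up to tolerance.
    return (all(dot(row, x) <= bi + tol for row, bi in zip(A, b))
            and all(xi >= -tol for xi in x))

c, A, b = (1.0, 1.0), [(1.0, 1.0)], (1.0,)
x1, x2 = (1.0, 0.0), (0.0, 1.0)
assert feasible(A, b, x1) and feasible(A, b, x2)
assert dot(c, x1) == dot(c, x2) == 1.0  # both are optimal

lam = 0.3
x = tuple(lam * u + (1 - lam) * v for u, v in zip(x1, x2))
assert feasible(A, b, x)                # the combination stays feasible
assert abs(dot(c, x) - 1.0) < 1e-9     # and attains the same optimal value
```

Note that this shows the optimal set is convex; it does not by itself settle the original question, since (as the accepted answer shows) a convex optimal set can still be unbounded.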