Why can one assume without loss of generality that the constraint $\mathbf{b}$ in linear programs is non-negative?


Problem 9.16 of The Nature of Computation states

Jumpstart for LP. How do we find an initial feasible solution for the simplex algorithm? By solving another linear program for which we know a feasible solution! As we outlined in Section 9.4.1, we can transform the set of inequalities $\mathbf{Ax\leq b}$ into a set of equalities $\mathbf{A'x'= b'}$ by adding "slack" variables $\mathbf{s\geq 0}$. Hence we can assume that our linear program is of the form $$ \max_\mathbf{x}(\mathbf{c}^T\mathbf{x})\qquad\text{subject to}\qquad \mathbf{Ax=b}\qquad\text{and}\qquad \mathbf{x\geq0}\,. $$ By adding more variables, construct yet another linear program that has a trivial feasible solution, and whose optimal solution is a feasible solution of this problem. Hint: if $\mathbf{b}\in\mathbb{R}^m$, you need $m$ additional variables. Without loss of generality you may assume that $\mathbf{b\geq 0}$.

I'm stuck understanding why one can assume that $\mathbf{b\geq 0}$. It seems that this assumption would lead straight to having $\mathbf{x=0}$ available as a trivial solution which is a little too easy.

There have been suggestions to simply multiply every row whose right-hand side $b_i$ is negative by $-1$. I don't think it's that easy. Consider the following example:

$$ \max_\mathbf{x}([1]\cdot\mathbf{x})\qquad\text{subject to}\qquad \begin{bmatrix} 1\\ -1 \end{bmatrix} \mathbf{x}\leq \begin{bmatrix} 2\\-1 \end{bmatrix} \qquad\text{and}\qquad \mathbf{x\geq 0} $$ It can easily be seen that this is maximized for $\mathbf{x}=[2]$. The corresponding problem with slack variables $\mathbf{s}$ has the same optimum and we must have $\mathbf{s}=[0\;1]^T$ to achieve equality. Now suppose we multiplied the second row by -1 to ensure $\mathbf{b\geq 0}$. Then $\mathbf{s}$ would have to become $[0\;-1]^T$ to still achieve equality, violating the constraint $\mathbf{s\geq 0}$.

There are 2 answers below.

Best answer:

$$\max\, [1]^T[x] \quad \text{s.t.} \quad \begin{bmatrix}1\\-1\end{bmatrix}[x]\le\begin{bmatrix}2\\-1\end{bmatrix},\, [x]\ge 0$$

Add slack variables:

$$\max\, \begin{bmatrix}1\\0\\0\end{bmatrix}^T\begin{bmatrix}x\\s_1\\s_2\end{bmatrix} \quad \text{s.t.} \quad \begin{bmatrix}1&1&0\\-1&0&1\end{bmatrix}\begin{bmatrix}x\\s_1\\s_2\end{bmatrix}=\begin{bmatrix}2\\-1\end{bmatrix},\, \begin{bmatrix}x\\s_1\\s_2\end{bmatrix}\ge 0$$

Negate rows as necessary:

$$\max\, \begin{bmatrix}1\\0\\0\end{bmatrix}^T\begin{bmatrix}x\\s_1\\s_2\end{bmatrix} \quad \text{s.t.} \quad \begin{bmatrix}1&1&0\\1&0&-1\end{bmatrix}\begin{bmatrix}x\\s_1\\s_2\end{bmatrix}=\begin{bmatrix}2\\1\end{bmatrix},\, \begin{bmatrix}x\\s_1\\s_2\end{bmatrix}\ge 0$$

The optimum is at $x=2,s_1=0,s_2=1$. What's the problem?
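The point is that the sign flip is applied to the entire equality row, including the slack column, so the solution set is unchanged. A quick numerical check of both systems (using numpy purely as an illustration):

```python
import numpy as np

# Equality system after adding slacks (second entry of b is negative)
A1 = np.array([[1.0, 1.0, 0.0],
               [-1.0, 0.0, 1.0]])
b1 = np.array([2.0, -1.0])

# Same system with the whole second row (slack column included) negated
A2 = np.array([[1.0, 1.0, 0.0],
               [1.0, 0.0, -1.0]])
b2 = np.array([2.0, 1.0])

z = np.array([2.0, 0.0, 1.0])  # x = 2, s1 = 0, s2 = 1
print(np.allclose(A1 @ z, b1), np.allclose(A2 @ z, b2))  # both hold
```

The same nonnegative vector $(x, s_1, s_2) = (2, 0, 1)$ satisfies both systems; the apparent contradiction in the question comes from negating only the original inequality row before introducing the slack, rather than negating the finished equality row.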

Another answer:

The text you quote suggests introducing artificial variables (after multiplying rows by $-1$ where needed) and changing the system to $Ax+y=b$. The first phase of the simplex method then aims at driving $y$ to $0$, using either the big-M method or the two-phase method. See, e.g., this document for an explanation of the two-phase method.
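As a sketch of the phase-one idea (assuming `scipy` is available; any LP solver would do), one can minimize the sum of the artificial variables $y$ subject to $Ax + y = b$, $x, y \geq 0$. Because $b \geq 0$, the point $(x, y) = (0, b)$ is trivially feasible, and an optimum of $0$ yields a feasible point of the original LP. The matrix below is the sign-corrected system from the example above:

```python
import numpy as np
from scipy.optimize import linprog

# Phase-one auxiliary LP for Ax = b, x >= 0 (b >= 0 after row negation):
#   minimize sum(y)  subject to  Ax + y = b,  x >= 0,  y >= 0.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, -1.0]])
b = np.array([2.0, 1.0])
m, n = A.shape

A_eq = np.hstack([A, np.eye(m)])               # coefficients of [x; y]
c = np.concatenate([np.zeros(n), np.ones(m)])  # objective: sum of artificials

res = linprog(c, A_eq=A_eq, b_eq=b, method="highs")  # bounds default to x >= 0
x_feasible = res.x[:n]                          # artificials driven to 0
print(res.fun, np.allclose(A @ x_feasible, b))
```

If the optimal value is strictly positive, the original system $Ax = b$, $x \geq 0$ has no feasible point at all.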