I am following a course on linear programming, and one of the exercises asks for an example showing that a theorem fails if an assumption is omitted from it.
The theorem is Theorem 5.5 from Vašek Chvátal's book *Linear Programming*. It states:
If a linear programming problem has at least one non-degenerate basic optimal solution, then there is a positive $\varepsilon$ with the following property: If $|t_i| \leq \varepsilon$ for all $ i = 1, 2, ..., m$, then the problem
$$ \begin{align} \text{maximize} \quad & \sum^{n}_{j=1}c_jx_j \\ \text{subject to} \quad & \sum^n_{j=1} a_{ij}x_j \leq b_i + t_i & (i = 1, 2, \dots, m) \\ & x_j \geq 0 & (j = 1, 2, \dots, n) \end{align} $$
has an optimal solution and its optimal value equals
$$ z^* + \sum^m_{i=1} y_i^* t_i $$
with $z^*$ standing for the optimal value of the linear programming problem and with $y^*_1, y^*_2, ..., y^*_m $ standing for the optimal solution of its dual.
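To get a feel for the claim, here is a small numerical check (my own illustration, not from the book) on an LP where the hypothesis *does* hold: maximize $x_1$ subject to $x_1 \leq 1$, $x_1 \geq 0$, which has the non-degenerate basic optimum $x_1^* = 1$, $z^* = 1$, and dual optimum $y_1^* = 1$. The formula $z^* + y_1^* t_1$ should then match the perturbed optimal value for small $t_1$:

```python
# Sketch using SciPy's linprog (assumes scipy is installed).
# linprog minimizes, so we negate c to maximize.
from scipy.optimize import linprog

base = linprog(c=[-1], A_ub=[[1]], b_ub=[1], bounds=[(0, None)])
z_star = -base.fun   # optimal value of the primal (= 1)
y_star = 1.0         # dual optimum, known in closed form for this tiny LP

for t1 in (-0.2, 0.0, 0.2):
    perturbed = linprog(c=[-1], A_ub=[[1]], b_ub=[1 + t1], bounds=[(0, None)])
    # Theorem 5.5's prediction: perturbed optimum = z* + y1* * t1
    assert abs(-perturbed.fun - (z_star + y_star * t1)) < 1e-7
print("formula verified for all tested t1")
```

This is exactly the behavior the theorem guarantees when a non-degenerate basic optimal solution exists; the exercise asks what can go wrong without it.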
The hypothesis to be omitted is the one stating that the original LP problem has at least one non-degenerate basic optimal solution.
So far, I haven't been able to grok what would happen if the hypothesis is omitted, and my search for alternative literature hasn't turned up much.
I assume the solution is to construct an LP problem with a degenerate optimal solution in the primal and then show that the conclusion doesn't hold. But how do I go about finding an LP problem that demonstrates this?
Or are there smarter ways to solve this problem?
Let $m=n=1$, $a_{11}=1$, $b_1=-1$, $c_1=1$; that is, maximize $x_1$ subject to $x_1 \leq -1$ and $x_1 \geq 0$. This problem is infeasible, and it remains infeasible for every $t_1$ with $|t_1| \leq \varepsilon < 1$ (including $t_1 = 0$), so no choice of $\varepsilon$ makes the conclusion hold: the perturbed problem never has an optimal solution.
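The infeasibility claim can be checked numerically (a sketch of mine, assuming SciPy is available); `linprog` reports status 2 when a problem is infeasible:

```python
# Verify that the counterexample LP (maximize x1 s.t. x1 <= -1 + t1, x1 >= 0)
# is infeasible for several small values of t1.
from scipy.optimize import linprog

for t1 in (-0.1, 0.0, 0.1):  # any |t1| <= epsilon < 1 behaves the same
    # linprog minimizes, so negate c to maximize.
    res = linprog(c=[-1], A_ub=[[1]], b_ub=[-1 + t1], bounds=[(0, None)])
    print(t1, res.status)  # status 2 means infeasible
```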