Help in understanding proof of lexicographic rule's role in terminating the simplex method


Theorem: The simplex method terminates as long as the leaving variable is selected by the lexicographic rule in each iteration.

I am reading through the proof of this theorem and understand all but one piece.

Considering an arbitrary row of a basic variable in terms of non-basic variables and a linear combination of tiny epsilons, we have

$$ x_{k}=(r_{0}+r_{1}\epsilon_{1}+...+r_{m}\epsilon_{m})-\sum_{j \notin B}d_{j}x_{j} $$

The idea is to prove at least one of the $r$s is not zero.

Writing $d_{k}=1$ and $d_{i}=0$ for all basic variables $x_{i}$ distinct from $x_{k}$, we have

$$ \sum_{j=1}^{n+m}d_{j}x_{j}=r_{0}+\sum_{i=1}^{m}r_{i}\epsilon_{i} $$

Since this equation has been obtained by algebraic manipulations from the definitions of the slack variables,

$$ x_{n+i}=b_{i}+\epsilon_{i}-\sum_{j=1}^{n}a_{ij}x_{j}\quad (i=1,2,...,m) $$

Substituting, we get

$$ \sum_{j=1}^{n}d_{j}x_{j}+\sum_{i=1}^{m}d_{n+i}(b_{i}+\epsilon_{i}-\sum_{j=1}^{n}a_{ij}x_{j})=r_{0}+\sum_{i=1}^{m}r_{i}\epsilon_{i} $$

Rearranging the above we get

$$ \sum_{j=1}^{n}(d_{j}-\sum_{i=1}^{m}d_{n+i}a_{ij})x_{j}+\sum_{i=1}^{m}(d_{n+i}-r_{i})\epsilon_{i}=r_{0}-\sum_{i=1}^{m}d_{n+i}b_{i} $$

Now the text says

we observe that the coefficient at each $x_{j}$, the coefficient at each $\epsilon_{i}$, and the right-hand side must equal zero.

My question is: why must these things equal zero? I am omitting the rest of the proof because I understand it. I just can't get my head around this one statement.

The theorem and its proof can be found on books.google.com under Theorem 3.2.

Possible answer: Is it because the last identity must hold for each and every choice of the $x_{j}$ and $\epsilon_{i}$ (with each $x_{j}$ and $\epsilon_{i}>0$), and the only way that can be guaranteed is if the coefficients on the LHS are each zero, which in turn forces the RHS to be zero as well?
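That intuition can be checked on a toy instance. The snippet below is only an illustration with generic symbols (the names `c1, c2, s1, s2, r0` are mine, not the book's data): it asks sympy for all coefficient values that make an identity of the form $\sum_j c_j x_j + \sum_i s_i \epsilon_i = r_0$ hold for every value of the indeterminates, and the only solution sets every coefficient, and hence the RHS, to zero.

```python
import sympy as sp

# Indeterminates (two x's and two epsilons, for a tiny example)
x1, x2, e1, e2 = sp.symbols('x1 x2 e1 e2')
# Unknown coefficients and right-hand side (hypothetical names)
c1, c2, s1, s2, r0 = sp.symbols('c1 c2 s1 s2 r0')

# Generic form of the rearranged equation:
#   c1*x1 + c2*x2 + s1*e1 + s2*e2 = r0, required to hold identically.
expr = c1*x1 + c2*x2 + s1*e1 + s2*e2 - r0

# Treating expr as a polynomial in x1, x2, e1, e2, the identity expr == 0
# forces every coefficient (including the constant term -r0) to vanish.
coeffs = sp.Poly(expr, x1, x2, e1, e2).coeffs()
solution = sp.solve(coeffs, [c1, c2, s1, s2, r0], dict=True)
print(solution)
```

The system is linear and fully determined, so sympy reports the single solution in which all five unknowns are zero, mirroring the coefficient-comparison step in the proof.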

Answer:

If at least one of the $r$s is nonzero, it is shown that the row is not degenerate, which is what you want. Since the proof is by contradiction, I think the last equation assumes that all those coefficients are zero, i.e. that the row is degenerate. What happens then? All your $d_{i}$ turn out to be zero, contradicting the assumption that $d_{k}=1$. I hope this helps clarify the proof.
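Spelling out that contradiction with the equation from the question (this is my reading of the argument, not a quote from the book): if the row were degenerate, every $r_{i}$ with $i\geq 1$ would vanish. In

$$ \sum_{j=1}^{n}(d_{j}-\sum_{i=1}^{m}d_{n+i}a_{ij})x_{j}+\sum_{i=1}^{m}(d_{n+i}-r_{i})\epsilon_{i}=r_{0}-\sum_{i=1}^{m}d_{n+i}b_{i} $$

the coefficient of each $\epsilon_{i}$ must be zero, so $d_{n+i}=r_{i}=0$ for $i=1,\dots,m$; the coefficient of each $x_{j}$ must also be zero, so $d_{j}=\sum_{i=1}^{m}d_{n+i}a_{ij}=0$ for $j=1,\dots,n$. Hence every $d_{j}$ is zero, contradicting $d_{k}=1$.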