Proof of the 100% rule in Linear Programming


I am a beginner student in linear programming. I was introduced to the 100% rule, which lets us decide whether the optimal basis will change when there are simultaneous increases/decreases in the RHS of the constraints or in the coefficients of the objective function.

Can someone hint at how to prove this 100% rule?

Edit: I found a pdf containing the formal proof on p. 19. From reading it, what I understand is that the weighted combination of the variations gives a feasible solution. But why should it also be optimal? I cannot figure that out.

There is 1 answer below
When the RHS is changed, there are two cases:

1- All constraints whose RHS is being changed are nonbinding. The current basis remains optimal if each RHS stays within its allowable range, so the values of the decision variables and of $z$ remain unchanged.

2- At least one constraint whose RHS is being changed is binding. Here the 100% rule is helpful.

$b_j$: current RHS of the $j$-th constraint

$\Delta b_j$: change in $b_j$

$I_j$: allowable increase for constraint $j$

$D_j$: allowable decrease for constraint $j$

$$ r_j=\begin{cases} \dfrac{\Delta b_j}{I_j}, & \text{if } \Delta b_j \geq 0\\[6pt] \dfrac{-\Delta b_j}{D_j}, & \text{if } \Delta b_j < 0 \end{cases} $$

The current basis remains optimal if $\sum_j r_j \leq 1$. (Note this is only a sufficient condition, not "iff": when $\sum_j r_j > 1$, the basis may or may not remain optimal, so the rule is inconclusive.)
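To address the question in the edit (why optimality, not just feasibility, is preserved), here is a sketch of the argument for the RHS case, written in terms of the current optimal basis matrix $B$. Dual feasibility does not involve $b$ at all, so only primal feasibility is at stake, and the set of right-hand sides that keep the basis primal-feasible is convex:

```latex
% Sketch (RHS case). Let B be the current optimal basis matrix.
% Optimality of B means:
%   dual feasibility:   c_B^T B^{-1} A - c^T >= 0   (does not involve b),
%   primal feasibility: B^{-1} b >= 0.
% Hence B stays optimal for a new RHS b' exactly when B^{-1} b' >= 0, i.e.
\[
  S \;=\; \{\, b : B^{-1} b \ge 0 \,\} \quad \text{(a convex set).}
\]
% The simultaneously changed RHS is a convex combination of the original b
% and the single-change points b + delta_j e_j, each of which lies in S by
% the definition of the allowable increase/decrease:
\[
  b' \;=\; b + \sum_j \Delta b_j\, e_j
     \;=\; \Bigl(1 - \sum_j r_j\Bigr) b
        \;+\; \sum_j r_j \bigl(b + \delta_j e_j\bigr),
  \qquad
  \delta_j = \begin{cases} I_j, & \Delta b_j \ge 0,\\ -D_j, & \Delta b_j < 0, \end{cases}
\]
% since Delta b_j = r_j * delta_j. The coefficients are nonnegative and sum
% to 1 whenever sum_j r_j <= 1, so b' lies in S and B remains optimal.
```

So the weighted point is not merely feasible: it keeps $B^{-1}b' \ge 0$ while dual feasibility is untouched, which is exactly the optimality condition for the same basis.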
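As a quick numeric illustration, here is a minimal Python sketch of the check. The allowable increases `I_j` and decreases `D_j` are hypothetical numbers standing in for what a solver's sensitivity report would give you:

```python
def hundred_percent_rule(changes, allow_inc, allow_dec):
    """Return (sum_of_ratios, verdict) for simultaneous RHS changes.

    changes[j]   = Delta b_j (signed change to the j-th RHS)
    allow_inc[j] = I_j, allowable increase for constraint j
    allow_dec[j] = D_j, allowable decrease for constraint j
    """
    total = 0.0
    for db, inc, dec in zip(changes, allow_inc, allow_dec):
        # r_j = Delta b_j / I_j for an increase, -Delta b_j / D_j for a decrease
        total += db / inc if db >= 0 else -db / dec
    # If the ratios sum to at most 1 (100%), the current basis is guaranteed
    # to remain optimal; otherwise the rule is inconclusive.
    return total, total <= 1.0

# Hypothetical example: raise b_1 by 4 (I_1 = 10) and lower b_2 by 3 (D_2 = 12).
# Ratios: 4/10 + 3/12 = 0.65 <= 1, so the basis stays optimal.
total, ok = hundred_percent_rule([4, -3], [10, 8], [6, 12])
print(total, ok)
```

Pushing the same changes past the allowable ranges (e.g. `[10, -12]`) drives the sum above 1, and the rule then says nothing either way.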