Linear Programming - Motivation behind the Dual Simplex Method


I am trying to understand the motivation behind the Dual Simplex Method, but I have run into some roadblocks. This is my current understanding of the Simplex Algorithm and of the primal and dual problems:

$1$. For a minimization problem, the Simplex Algorithm starts from a basic feasible solution; it then replaces individual basis columns with external columns until $c_j - C_B B^{-1} A_j \geq 0~\forall~j$, where $c_j$ is the $j$th cost coefficient, $C_B$ is the cost vector corresponding to the feasible basis $B$, and $A_j$ is the $j$th column external to $B$.

$2$. If $x_0$ is a primal feasible solution, $y_0$ is a dual feasible solution, and the two satisfy the complementary slackness conditions, then $x_0$ is the primal optimal solution and $y_0$ is the dual optimal solution.

$3$. If $x_0$ is primal feasible, $y_0$ is dual feasible, and $c^T x_0 = b^T y_0$, then $x_0$ is the primal optimal solution and $y_0$ is the dual optimal solution.
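The optimality tests in points $1$ and $3$ are easy to check numerically. Below is a minimal sketch (the LP data and the basis choice are made up for illustration; NumPy is assumed) that computes the reduced costs $c_j - C_B B^{-1} A_j$ for a given basis and confirms that the primal and dual objective values coincide at optimality:

```python
import numpy as np

# Made-up toy LP in standard form:  min c^T x  s.t.  A x = b,  x >= 0.
# We check the reduced-cost test (point 1) and strong duality (point 3)
# for the basis consisting of columns 0 and 1.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([2.0, 3.0])
c = np.array([1.0, 1.0, 4.0])

basis = [0, 1]
B = A[:, basis]
c_B = c[basis]

y = np.linalg.solve(B.T, c_B)      # simplex multipliers: y = B^{-T} c_B
reduced = c - A.T @ y              # c_j - C_B B^{-1} A_j for every column j

x = np.zeros(3)
x[basis] = np.linalg.solve(B, b)   # basic solution: x_B = B^{-1} b

print(reduced)                     # all >= 0, so the basis is optimal (point 1)
print(c @ x, b @ y)                # equal objective values (point 3)
```

Here the reduced costs come out nonnegative and $c^T x_0 = b^T y_0$, so this particular basis is optimal for both problems at once.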

Using this, my professor introduced the Dual Simplex Algorithm by first allowing a component of the right-hand side to be negative, $b_r < 0$, and then proceeding from there.

However, I do not quite understand the need to consider $b_r < 0$, nor the steps of the algorithm that follow. My confusion is this: the simplex method is supposedly applied to the dual formulation without explicitly forming the dual. How is that done? Could someone help me build the dual simplex algorithm from here?

Answer:

After adding the slack variables, consider a standard-form linear programming problem:
$$
\begin{align*}
&\text{max} \quad && c^Tx \\
&\text{subject to} \quad && Ax = b \\
&&& x \geq 0
\end{align*}
$$
where the matrix $A$ is written in partitioned form as $A = [B\ \ N]$. We can write
$$
x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \\ w_1 = x_{n+1} \\ \vdots \\ w_m = x_{n+m} \end{pmatrix}
\quad \text{and} \quad
z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \\ y_1 = z_{n+1} \\ \vdots \\ y_m = z_{n+m} \end{pmatrix}
$$
where the $x_i$ are the primal variables, the $w_i$ are the primal slacks, the $z_i$ are the dual slacks, and the $y_i$ are the dual variables. Let the subscript $N$ indicate the non-basic part and $B$ the basic part. The primal dictionary then updates as
$$
\begin{align*}
\zeta &= \bar{\zeta} - z_N^Tx_N \\
x_B &= \bar{x}_B - B^{-1}Nx_N
\end{align*}
$$
and the dual dictionary updates as
$$
\begin{align*}
-\zeta &= -\bar{\zeta} - \bar{x}_B^Tz_B \\
z_N &= \bar{z}_N + (B^{-1}N)^Tz_B
\end{align*}
$$
Notice that the coefficient matrix of the dual dictionary, $(B^{-1}N)^T$, is the negative transpose of the primal coefficient matrix $-B^{-1}N$. This is why the dual simplex method never needs to form the dual explicitly: pivoting in the primal dictionary simultaneously pivots the dual one.
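To see why the iteration starts from a row with $b_r < 0$: a basis with all reduced costs nonnegative but some $\bar{x}_{B,r} = (B^{-1}b)_r < 0$ is dual feasible yet primal infeasible, and a dual simplex pivot drives that negative component out. Here is a hedged sketch of a single such pivot on a made-up minimization instance (the constraint $x_1 + x_2 \geq 2$ rewritten as $-x_1 - x_2 + s = -2$, with the slack $s$ basic); this is a toy illustration, not necessarily the professor's exact tableau rules:

```python
import numpy as np

# Toy LP:  min 2*x1 + 3*x2  s.t.  x1 + x2 >= 2,  x >= 0,
# written in equality form as  -x1 - x2 + s = -2.
A = np.array([[-1.0, -1.0, 1.0]])
b = np.array([-2.0])
c = np.array([2.0, 3.0, 0.0])
basis = [2]                                  # the slack starts basic

B = A[:, basis]
x_B = np.linalg.solve(B, b)                  # = [-2]: primal infeasible
y = np.linalg.solve(B.T, c[basis])
z = c - A.T @ y                              # reduced costs, all >= 0: dual feasible

r = int(np.argmin(x_B))                      # leaving row: most negative x_B entry
row = np.linalg.solve(B, A)[r]               # r-th row of B^{-1} A
# entering column: dual ratio test over columns with a negative entry in the row
cands = [j for j in range(A.shape[1]) if row[j] < -1e-12]
enter = min(cands, key=lambda j: z[j] / -row[j])

basis[r] = enter                             # pivot
B = A[:, basis]
x = np.zeros(A.shape[1])
x[basis] = np.linalg.solve(B, b)
print(basis, x, c @ x)                       # primal feasible (and here optimal)
```

One pivot brings $x_1$ into the basis, the right-hand side becomes nonnegative, and the new basis is optimal; throughout, the dual feasibility (nonnegative reduced costs) is preserved, which is exactly the simplex method running on the dual without ever writing the dual down.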

Take a look at https://vanderbei.princeton.edu/542/lectures/lec6.pdf or Chapter 6 of the book Linear Programming by Vanderbei for a detailed explanation of the negative transpose property.
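As a quick numerical sanity check of the negative-transpose property (the matrix $A$ and basis choice below are random made-up data; NumPy is assumed):

```python
import numpy as np

# Verify numerically that the coefficient matrix of the dual dictionary,
# (B^{-1} N)^T, is the negative transpose of the primal coefficient
# matrix -B^{-1} N.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B, N = A[:, :3], A[:, 3:]                 # first three columns taken as the basis

primal_coef = -np.linalg.solve(B, N)      # coefficients in x_B = x_B_bar - B^{-1} N x_N
dual_coef = np.linalg.solve(B, N).T       # coefficients in z_N = z_N_bar + (B^{-1} N)^T z_B

print(np.allclose(dual_coef, -primal_coef.T))
```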