Initial vertex in Simplex algorithm


Conceptually, the simplex algorithm moves from one vertex of a polytope to an adjacent one. But how is the initial vertex computed?

If I follow the simplex tableau scheme, some step should correspond to the computation of an initial vertex. Can someone explain in more detail how this happens?

For example, according to https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-251j-introduction-to-mathematical-programming-fall-2009/lecture-notes/MIT6_251JF09_lec05.pdf, given the current basis and basic feasible solution $x$, we can obtain a better basis and basic feasible solution by finding an index $j$ in the non-basic column set whose reduced cost is negative (i.e., the objective decreases). This yields a direction $d$ such that we obtain the next vertex at $x + \theta d$ (if $x$ is not optimal) with $\theta > 0$.
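To make the lecture-note update concrete, here is how I currently picture one iteration for the standard form $\min c^T x$ subject to $Ax = b$, $x \ge 0$ (a NumPy sketch of my own; the function name, data layout, and tolerances are not from the notes):

```python
import numpy as np

def simplex_pivot(A, b, c, B):
    """One iteration of x <- x + theta*d for min c^T x, Ax = b, x >= 0,
    starting from the basic feasible solution determined by the basis
    index list B (illustrative sketch only; no anti-cycling rule)."""
    m, n = A.shape
    A_B = A[:, B]
    x_B = np.linalg.solve(A_B, b)             # basic part of the current vertex
    y = np.linalg.solve(A_B.T, c[B])          # simplex multipliers
    # reduced costs c_j - c_B^T A_B^{-1} A_j for the non-basic columns
    reduced = {j: c[j] - y @ A[:, j] for j in range(n) if j not in B}
    j = min(reduced, key=reduced.get)         # most negative reduced cost
    if reduced[j] >= 0:
        return B, "optimal"
    u = np.linalg.solve(A_B, A[:, j])         # -u is the basic part of d
    # ratio test: largest theta keeping x_B - theta*u >= 0
    ratios = [(x_B[i] / u[i], i) for i in range(m) if u[i] > 1e-12]
    if not ratios:
        return B, "unbounded"
    theta, i_leave = min(ratios)
    B[i_leave] = j                            # entering column replaces leaving one
    return B, "pivoted"
```

Applied repeatedly this walks from vertex to vertex, but it still needs a feasible starting basis `B`, which is exactly the gap my question is about.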

However, if I look at the simplex tableau algorithm, I don't seem to have to compute an initial basis (i.e., check that the corresponding matrix is nonsingular).

How do these two concepts relate?


When there is no obvious initial vertex to start from, one common approach is to solve another linear program whose solution yields an initial vertex of the linear program we actually want to solve. This is called the two-phase method.

For example, if we have to solve

$$\begin{align}\max\quad &&c^T &x \\ \text{subject to}\quad &&a_1^T &x \le 1 \\ &&a_2^T &x \ge 1 \\ &&&x \ge 0\end{align}$$

where $c$, $a_1$ and $a_2$ are some given vectors, we'd first solve (written in canonical form, with slack variables $s_1$ and $s_2$, and artificial variable $w_1$):

$$ \begin{alignat}{5} \min\quad && & & & & & & &w_1 \\ \text{subject to} \quad && a_1^T &x& +&s_1& & & & &=1 \\ && a_2^T &x& & & - &s_2& + &w_1&=1 \\ && &x& & & & & & &\ge 0 \\ && & & &s_1& & & & &\ge 0 \\ && & & & & &s_2& & &\ge 0 \\ && & & & & & & &w_1&\ge 0 \\ \end{alignat} $$

This problem has a trivial initial vertex, namely $x=0$, $s_1=1$, $s_2=0$, and $w_1=1$, and can be solved with the simplex method. If the optimal value of this first-phase problem is zero (so $w_1 = 0$, and $w_1$ can be driven out of the basis), then the phase-one optimum is a feasible solution of the original problem, and we can therefore use the final basis of the first phase as the starting basis for the second phase. If the first-phase problem has a non-zero optimal value (in which case $w_1$ remains in the final basis at a positive value), then the original problem is infeasible.
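With concrete made-up data, say $a_1 = (1,1)$ and $a_2 = (2,1)$, the first phase can be run end to end with a bare-bones revised simplex loop (a NumPy sketch of my own, no anti-cycling rule):

```python
import numpy as np

# Phase one for the example above with the made-up data a1 = (1, 1),
# a2 = (2, 1).  Variable order: x1, x2, s1, s2, w1.
A = np.array([[1., 1., 1.,  0., 0.],   # a1^T x + s1      = 1
              [2., 1., 0., -1., 1.]])  # a2^T x - s2 + w1 = 1
b = np.array([1., 1.])
c = np.array([0., 0., 0., 0., 1.])     # minimize w1

B = [2, 4]                             # trivial initial basis: s1 = 1, w1 = 1
while True:
    A_B = A[:, B]
    x_B = np.linalg.solve(A_B, b)      # basic part of the current vertex
    y = np.linalg.solve(A_B.T, c[B])   # simplex multipliers
    reduced = {j: c[j] - y @ A[:, j]
               for j in range(A.shape[1]) if j not in B}
    j = min(reduced, key=reduced.get)
    if reduced[j] > -1e-9:             # no improving column: phase one done
        break
    u = np.linalg.solve(A_B, A[:, j])
    theta, i = min((x_B[k] / u[k], k) for k in range(len(B)) if u[k] > 1e-9)
    B[i] = j                           # pivot: entering replaces leaving

opt = c[B] @ x_B                       # phase-one optimum; 0 here, so feasible
```

For this data the loop ends with the phase-one optimum equal to zero and $w_1$ out of the basis, so the resulting vertex ($x = (0.5, 0)$, $s_1 = 0.5$, $s_2 = 0$) is a feasible starting vertex for the second phase.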

This is just a small example, but any linear program can be treated this way. The idea is to add an artificial variable for each constraint for which there is no obvious slack variable to put into the basis, and then to minimize the sum of all artificial variables, trying to reach a feasible solution of the modified problem in which all artificial variables are zero.
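This construction can be written down mechanically. A hypothetical helper of my own, for a standard-form system $Ax = b$ whose rows have been scaled so that $b \ge 0$:

```python
import numpy as np

# Build phase-one data for  min 1^T w  s.t.  A x + I w = b,  x, w >= 0,
# assuming rows are scaled so that b >= 0.  Helper name is my own.
def phase_one_data(A, b):
    m, n = A.shape
    A1 = np.hstack([A, np.eye(m)])                   # one artificial per row
    c1 = np.concatenate([np.zeros(n), np.ones(m)])   # minimize sum of artificials
    B0 = list(range(n, n + m))                       # trivial basis: w = b
    return A1, c1, B0
```

If the phase-one optimum is zero, dropping the artificial columns leaves a feasible basis from which the second phase can minimize the original objective.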