an artificial vector in the big M method that produces a BFS


Suppose we have an LP maximization problem whose constraints are put in the form $I {\bf x_B} + B^{-1} N {\bf x_N} = {\bf \overline{b}}$, where ${\bf \overline{b}} \ngeq 0$.

If we add just one artificial variable, say $s$, with activity vector ${\bf \hat{b}}$ such that ${\bf \hat{b}} \leq {\bf \overline{b}}$, can we then obtain a basic feasible solution?

Attempt:

Assume ${\bf \overline{b}} < 0$, that is, $\overline{b_k} < 0$ for all $k=1,\dots,m$, and put $Y = B^{-1} N$. If we use the big-M method, then we have the objective function $z = {\bf c x} - M s$ and our tableau looks like

\begin{matrix} z & x_B & x_N & s & RHS \\ \hline 1 & 0 & - c_N & M & z_0\\ 0 & Id & Y& {\bf e_i}&{\bf \overline{b}} \\ \end{matrix}

where ${\bf e_i}$ has a 1 in the $i$th position. Now, we must clear the $M$ at the top of the tableau: we subtract $M$ times row $i$ from the objective row, which gives

\begin{matrix} z & x_B & x_N & s & RHS \\ \hline 1 & -M {\bf e_i}^T & - c_N & 0 & z_0 - M \overline{b_i}\\ 0 & Id & Y& {\bf e_i}&{\bf \overline{b}} \\ \end{matrix}

and, in the $x_N$ block, component $j$ of the vector $-{\bf c_N}$ transforms into $-c_j - M y_{ji}$, where $y_{ji}$ is the $i$th entry of the column ${\bf y_j} = (y_{j1},y_{j2},\dots,y_{jm})^T$.
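For a concrete check of this price-out step, here is a minimal NumPy sketch with made-up numbers (the values of `M`, `c_N`, `y_i`, and `b_i` are illustrative assumptions, using a numeric stand-in for the symbolic $M$): it subtracts $M$ times constraint row $i$ from the objective row so that the $M$ in the $s$ column is cleared.

```python
import numpy as np

# Hypothetical small instance (made-up data for illustration).
M = 1e6                       # numeric stand-in for the "big M" penalty
c_N = np.array([3.0, 5.0])
y_i = np.array([2.0, 1.0])    # row i of Y = B^{-1} N
b_i = -4.0                    # \overline{b_i} < 0
z0 = 0.0

# Objective row over the blocks (x_N, s, RHS) before clearing M:
obj = np.hstack([-c_N, M, z0])
# Constraint row i over the same blocks (s has entry 1 in row i):
row_i = np.hstack([y_i, 1.0, b_i])

# Subtract M times row i so the s entry of the objective row becomes 0.
obj -= M * row_i
print(obj)   # entries: -c_j - M*y_ji in the x_N block, 0, z0 - M*b_i
```

After the operation, the $x_N$ entries of the objective row are exactly $-c_j - M y_{ji}$ and the right-hand side is $z_0 - M\overline{b_i}$, matching the updated tableau.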

Here is where I get stuck, since I don't really understand what they mean by an activity vector. Am I on the right track?

Update: (10/07)

Suppose it is possible to put the constraints of a linear program in the form $I {\bf x_B} + B^{-1} N {\bf x_N} = {\bf \overline{b}}$, and assume ${\bf \overline{b}} \ngeq 0 $. If we write ${\bf \overline{b}} = ( \overline{b_1}, \overline{b_2}, \dots, \overline{b_m})^T$, then there is at least one index $r$ such that $\overline{b_r} < 0$. Now add an artificial variable $s$ with activity vector

$${\bf \hat{b}} = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_r \\ \vdots \\ \beta_m \\ \end{bmatrix} $$

and choose its entries in such a way that $\beta_i \leq \overline{b_i}$ for all $i=1,2,\dots,m$, with $\beta_r = \overline{b_r}$, which trivially satisfies the above. Now consider the $r$th row of our tableau:

$$ \begin{bmatrix} 0 & 0 & \dots & 1 & \dots & 0 & {\bf y_r} & \beta_r & \overline{b_r}\\ \end{bmatrix}$$

where ${\bf y_r}$ is the $r$th row of the matrix $B^{-1}N$. By our choice of $\beta_r$, if we divide this row by $\beta_r$, we obtain

$$ \begin{bmatrix} 0 & 0 & \dots & \dfrac{1}{\beta_r} & \dots & 0 & \dfrac{1}{\beta_r} {\bf y_r} & 1 & 1\\[2ex] \end{bmatrix}$$

We now pivot so that the column of the artificial variable becomes a canonical vector: for each $i \neq r$, subtract $\beta_i$ times the new $r$th row from row $i$. The right-hand-side vector then becomes

$$ \begin{bmatrix} \overline{b_1} - \beta_1 \\ \overline{b_2} - \beta_2 \\ \vdots \\ 1 \\ \vdots \\ \overline{b_m} - \beta_m \\ \end{bmatrix} $$

and our hypothesis $\beta_i \leq \overline{b_i}$ implies that every entry of the preceding vector is greater than or equal to zero, so the resulting basic solution is feasible, i.e., a BFS, as desired.
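The whole construction can be checked numerically. Below is a minimal NumPy sketch under stated assumptions: the matrix `Y`, the vector `b_bar`, and the particular choice $\beta_i = \min(\overline{b_i}, \overline{b_r})$ are all made up for illustration (any $\beta$ with $\beta_i \leq \overline{b_i}$ and $\beta_r = \overline{b_r}$ would do).

```python
import numpy as np

# Hypothetical reduced system I x_B + Y x_N = b_bar with some negative
# right-hand sides, so the current basic solution is infeasible.
Y = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [0.0, 1.0]])
b_bar = np.array([-4.0, 2.0, -1.0])

# Pivot row r: any index with b_bar[r] < 0 (here: the most negative entry).
r = int(np.argmin(b_bar))

# Activity vector of the artificial variable s: beta_i <= b_bar_i for all i,
# with beta_r = b_bar_r. One valid choice: beta_i = min(b_bar_i, b_bar_r).
beta = np.minimum(b_bar, b_bar[r])

# Constraint rows of the tableau, as blocks [ I | Y | beta | b_bar ].
m = len(b_bar)
T = np.hstack([np.eye(m), Y, beta[:, None], b_bar[:, None]])

# Pivot on (row r, column of s) to bring s into the basis.
s_col = m + Y.shape[1]
T[r] /= T[r, s_col]                  # divide row r by beta_r
for i in range(m):
    if i != r:
        T[i] -= T[i, s_col] * T[r]   # eliminate s from the other rows

rhs = T[:, -1]
print(rhs)   # b_bar_i - beta_i >= 0 for i != r, and 1 in row r
```

Since $\beta_i \leq \overline{b_i}$, the new right-hand side $\overline{b_i} - \beta_i$ is nonnegative in every row $i \neq r$, and row $r$ carries the value $\overline{b_r}/\beta_r = 1$, so the pivot indeed produces a basic feasible solution.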