How the dual LP solves the primal LP


When I heard someone discussing LP the other day, I heard him say, "Well, we could just solve the dual."

I know that both the primal LP and its dual must have the same optimal objective value (assuming both are feasible and bounded). I also understand complementary slackness (each primal variable times its corresponding dual slack variable is 0, as is each dual variable times its corresponding primal slack variable).

To me, solving the dual gives you some useful information about the solution of the primal:

  1. The optimal objective value, which restricts the primal solution to an $(n-1)$-dimensional hyperplane (a level set of the objective)
  2. Every nonzero dual slack variable forces the corresponding primal variable to be 0.

But aside from this information, it doesn't seem to me that solving the dual truly solves the primal LP. Knowing the optimal objective value can help (given it, simply find a primal feasible point attaining that value), as can knowing which primal variables are 0. But the latter depends on the instance: if the dual solution has many zero slack variables, then complementary slackness pins down few of the primal variables.

My question is this: when people say "We'll solve the dual," do they mean that this actually solves the primal, or that it simply gives useful information that helps solve the primal faster?

Thanks for your help; none of my colleagues could answer this.

EDIT: My main question is equivalent to "How can we prove there are enough equations to determine all variables?" Please see comment to answer below.


There are two aspects of this.

  1. If you use the simplex method or some variant of it, you are actually simultaneously solving the primal and dual. That is, from an optimal simplex tableau you can read off both an optimal solution to the primal and an optimal solution to the dual.
  2. From an optimal solution of either primal or dual, complementary slackness reduces the other one (at least in nondegenerate cases) to a relatively simple matter of solving a system of $m$ linear equations in $m$ unknowns.
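The first point can be seen directly in a modern solver. As a sketch (assuming SciPy's HiGHS-backed `linprog`; sign conventions for dual values vary between solvers), the LP from the example below can be solved as a minimization, and the dual variables read off from the same solve:

```python
# Solve the primal LP and read off the dual values from the solver's result.
from scipy.optimize import linprog

# maximize 2*x1 + 16*x2 + 2*x3  ->  minimize the negated objective
c = [-2, -16, -2]
A_ub = [[2, 1, -1],    #  2*x1 +  x2 -   x3 <= -3
        [-3, 1, 2]]    # -3*x1 +  x2 + 2*x3 <= 12
b_ub = [-3, 12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")

print("optimal value:", -res.fun)   # 42, shared by primal and dual
# HiGHS reports nonpositive marginals for <= constraints in a minimization;
# negating them gives the dual variables of the original maximization.
y = [-m for m in res.ineqlin.marginals]
print("dual solution:", y)          # y1 = 10, y2 = 6
```

So one call to the solver yields both the primal solution (`res.x`) and the dual solution, with no separate dual solve.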

EDIT: Here's a typical example. Consider the (primal) problem P:

$$ \eqalign{\text{maximize } & 2 x_1 +16 x_2 +2 x_3 \cr \text{subject to} &\cr & 2 x_1 + x_2 - x_3 \le -3 \cr & -3 x_1 + x_2 + 2 x_3 \le 12 \cr & x_1, x_2, x_3 \ge 0 }$$

and suppose you know that $x_1 = 0$, $x_2 = 2$, $x_3 = 5$ (and thus slack variables $\xi_1 = 0$, $\xi_2 = 0$) is an optimal solution. The dual problem (with decision variables $y_1, y_2$ and slack variables $\eta_1, \eta_2, \eta_3$) has equations

$$ \eqalign{ 2 y_1 - 3 y_2 - \eta_1 &= 2\cr y_1 + y_2 - \eta_2 &= 16\cr -y_1 + 2 y_2 - \eta_3 &= 2\cr}$$ But complementary slackness tells you $\eta_2 = 0$ and $\eta_3 = 0$. Putting these in and solving the second and third equations $$ \eqalign{ y_1 + y_2 &= 16\cr -y_1 + 2 y_2 &= 2\cr}$$ you get $y_1 = 10$, $y_2 = 6$, and then in the first equation $\eta_1 = 0$.
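The back-substitution above is easy to check numerically. A minimal sketch in NumPy (solving just the $2\times 2$ system that complementary slackness leaves behind):

```python
import numpy as np

# Complementary slackness (x2 > 0, x3 > 0) forces eta2 = eta3 = 0, so the
# second and third dual equations become a 2x2 linear system in y1, y2.
A = np.array([[1.0, 1.0],    #  y1 +   y2 = 16
              [-1.0, 2.0]])  # -y1 + 2*y2 =  2
b = np.array([16.0, 2.0])
y1, y2 = np.linalg.solve(A, b)
print(y1, y2)                # y1 = 10, y2 = 6

# Back-substitute into the first dual equation: 2*y1 - 3*y2 - eta1 = 2
eta1 = 2 * y1 - 3 * y2 - 2
print(eta1)                  # eta1 = 0

# Sanity check: primal and dual objective values coincide
primal = 2 * 0 + 16 * 2 + 2 * 5   # = 42
dual = -3 * y1 + 12 * y2          # = 42
print(primal, dual)
```

In the nondegenerate case this is all that "solving the primal from the dual" amounts to: one square linear system.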