How the dual LP solves the primal LP

When I overheard someone discussing LP the other day, he said, "Well, we could just solve the dual."

I know that both the primal LP and its dual must have the same optimal objective value (assuming both are feasible and bounded). I also understand complementary slackness: each primal variable times its corresponding dual slack variable is 0, and each dual variable times its corresponding primal slack variable is 0.
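In symbols (writing the primal as $\max\ c^\top x$ subject to $Ax \le b$, $x \ge 0$, and its dual as $\min\ b^\top y$ subject to $A^\top y \ge c$, $y \ge 0$, which is the form I assume throughout), I mean

$$ x_j\,(A^\top y - c)_j = 0 \ \text{ for every } j, \qquad y_i\,(b - Ax)_i = 0 \ \text{ for every } i. $$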

To me, solving the dual gives you some useful information about the solution of the primal:

  1. The optimal objective value, which restricts the primal solution to an $(n-1)$-dimensional hyperplane.
  2. Every nonzero dual slack variable forces the corresponding primal variable to be 0.

But aside from this information, it doesn't seem to me that solving the dual truly solves the primal LP. Knowing the optimal objective value can help (given it, simply find a primal feasible point attaining that value), as can knowing which primal variables are 0. But the latter depends on the instance: if the dual solution has many zero slack variables, then it tells you little about the primal variables.

My question is this: when people say "We'll solve the dual," do they mean that it actually solves the primal, or simply that it gives useful information that helps solve the primal faster?

Thanks for your help; none of my colleagues could answer this.

EDIT: My main question is equivalent to "How can we prove there are enough equations to determine all the variables?" Please see my comment on the answer below.


Solution 1:

There are two aspects of this.

  1. If you use the simplex method or some variant of it, you are actually simultaneously solving the primal and dual. That is, from an optimal simplex tableau you can read off both an optimal solution to the primal and an optimal solution to the dual.
  2. From an optimal solution of either the primal or the dual, complementary slackness reduces the other one (at least in nondegenerate cases) to the relatively simple matter of solving a system of $m$ linear equations in $m$ unknowns (see the matrix form below).
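In matrix form (my notation, for the nondegenerate case: $A_B$ is the square submatrix of columns of the standard-form constraint matrix corresponding to the $m$ nonzero basic primal variables, and $c_B$ the matching objective coefficients), that system is just

$$ A_B^\top y = c_B, $$

which is exactly what the example below solves with $m = 2$.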

EDIT: Here's a typical example. Consider the (primal) problem P:

$$ \eqalign{\text{maximize } & 2 x_1 +16 x_2 +2 x_3 \cr \text{subject to} &\cr & 2 x_1 + x_2 - x_3 \le -3 \cr & -3 x_1 + x_2 + 2 x_3 \le 12 \cr & x_1, x_2, x_3 \ge 0 }$$

and suppose you know that $x_1 = 0$, $x_2 = 2$, $x_3 = 5$ (and thus both slack variables $\xi_1 = 0$, $\xi_2 = 0$) is an optimal solution. The dual problem (minimize $-3 y_1 + 12 y_2$ over $y_1, y_2 \ge 0$, with slack variables $\eta_1, \eta_2, \eta_3$) has the equations

$$ \eqalign{ 2 y_1 - 3 y_2 - \eta_1 &= 2\cr y_1 + y_2 - \eta_2 &= 16\cr -y_1 + 2 y_2 - \eta_3 &= 2\cr}$$

But complementary slackness (since $x_2 > 0$ and $x_3 > 0$) tells you $\eta_2 = 0$ and $\eta_3 = 0$. Putting these in and solving the second and third equations

$$ \eqalign{ y_1 + y_2 &= 16\cr -y_1 + 2 y_2 &= 2\cr}$$

you get $y_1 = 10$, $y_2 = 6$, and then from the first equation $\eta_1 = 0$. As a check, the dual objective $-3(10) + 12(6) = 42$ equals the primal objective $2(0) + 16(2) + 2(5) = 42$.
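For anyone who wants to check this numerically, here is a minimal sketch (my own addition, using only numpy) that starts from the primal optimum given above, recovers $y$ from the two tight dual constraints exactly as in the hand computation, and then confirms optimality by comparing objective values:

```python
# Start from the known primal optimum and recover the dual optimum
# via complementary slackness, then certify optimality by duality.
import numpy as np

c = np.array([2.0, 16.0, 2.0])            # primal: maximize c^T x
A = np.array([[ 2.0, 1.0, -1.0],
              [-3.0, 1.0,  2.0]])         # subject to A x <= b, x >= 0
b = np.array([-3.0, 12.0])

x = np.array([0.0, 2.0, 5.0])             # the optimal solution assumed above

# Complementary slackness: x_j > 0 forces the j-th dual constraint
# (A^T y)_j >= c_j to hold with equality -> a square system for y.
tight = x > 0                              # here: the constraints for x_2, x_3
y = np.linalg.solve(A.T[tight], c[tight])  # solves y1 + y2 = 16, -y1 + 2y2 = 2

print("y =", y)                            # [10.  6.]
print("dual feasible:", np.all(A.T @ y >= c - 1e-9) and np.all(y >= 0))
print("objectives:", c @ x, b @ y)         # both 42, so both solutions optimal
```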

Solution 2:

This answer is wrong, as pointed out by others on this page. See my other answer for a (hopefully) correct one.

It may be a little late, but since I had the same doubt, I decided to write an answer for future reference.

The key is to understand that one assumes the optimal solution is a vertex. If it is not one, it is not too hard to find an optimal vertex starting from your optimal solution.

I shall use the notation Oliver introduced in his comment on the other answer. That is, assume the original problem has $n$ variables and $k$ constraints (hence $k$ slack variables), where $u\le n$ of the variables are nonzero and $v\le k$ of the slack variables are nonzero.

Once you have a vertex, you know that at least $n$ of the $k+n$ inequalities (the $k$ "matrix" constraints and the $n$ nonnegativity constraints) hold with equality. This is because in an $n$-dimensional space (that of the original variables), a vertex is the intersection of at least $n$ hyperplanes.

The active constraints are exactly those corresponding to the $k-v$ slack variables that are zero and the $n-u$ original variables that are zero, so you have:

$$(k-v)+(n-u)\geq n\Rightarrow k\geq u+v.$$

However, to guarantee (per Oliver's comment that the dual problem has $k$ variables and $n$ constraints) that "the number of unknowns in the dual does not exceed the number of equations in the dual", one would need the reverse inequality $k\leq u+v$: after applying complementary slackness, the dual variables that remain unknown number $k-v$, and the dual constraints known to be tight number $u$, so we would need $u\geq k-v$. The counting above gives the opposite bound, which is the error referred to at the top of this answer.