Primal and dual solutions to linear programming

Three conditions must be satisfied in order for a pair of primal and dual solutions to a linear programming problem to be optimal:

  1. The primal solution must be feasible.
  2. The dual solution must be feasible.
  3. Complementary slackness must be satisfied. (Remember that primal variables are paired with dual slack variables and dual variables are paired with primal slack variables. Complementary slackness is the requirement that, for each of these pairs, at least one of the two variables be zero; a small numerical check of all three conditions follows this list.)

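To see the three conditions in action, here is a minimal numpy sketch, assuming the standard-form pair $\max\, c^T x$ subject to $Ax \le b$, $x \ge 0$ and its dual $\min\, b^T y$ subject to $A^T y \ge c$, $y \ge 0$; the data `A`, `b`, `c` and the candidate solutions `x`, `y` are made up for illustration:

```python
import numpy as np

# Toy LP (max c^T x  s.t.  A x <= b, x >= 0) and its dual
# (min b^T y  s.t.  A^T y >= c, y >= 0). All data is illustrative.
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

x = np.array([4.0, 0.0])   # candidate primal solution
y = np.array([3.0, 0.0])   # candidate dual solution

tol = 1e-9
primal_slack = b - A @ x   # paired with the dual variables y
dual_slack = A.T @ y - c   # paired with the primal variables x

# 1. Primal feasibility: A x <= b and x >= 0.
primal_feasible = np.all(primal_slack >= -tol) and np.all(x >= -tol)

# 2. Dual feasibility: A^T y >= c and y >= 0.
dual_feasible = np.all(dual_slack >= -tol) and np.all(y >= -tol)

# 3. Complementary slackness: in each pair, at least one member is zero.
comp_slack = (np.all(np.abs(x * dual_slack) <= tol)
              and np.all(np.abs(y * primal_slack) <= tol))

print(primal_feasible, dual_feasible, comp_slack)  # True True True
print(c @ x, b @ y)  # objective values agree: 12.0 12.0
```

When all three checks pass, the two objective values necessarily coincide, as the last line shows.
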
The primal simplex method (after Phase I) keeps (1) and (3) always true and searches for a solution that satisfies (2). The dual simplex method (again, after Phase I) keeps (2) and (3) always true and searches for a solution that satisfies (1). (In both cases (3) comes for free: at any basis, nonbasic primal variables are zero and basic primal variables have zero reduced cost, so in each pair at least one variable vanishes automatically.)

The approach you are describing (minus the $b^T y \geq c^T x$ constraint) is used. It's the remaining option, in which (1) and (2) are always kept true while the algorithm searches for a solution that satisfies (3). As Yuval Filmus indicates, this is called a primal-dual method or the parametric self-dual simplex method. See, for example, Rader's Deterministic Operations Research, pp. 432-440, or Vanderbei's Linear Programming: Foundations and Extensions, pp. 119-121. (See also Vanderbei's text for how to find an initial feasible solution to both problems, i.e., Phase I.) The idea dates back at least to George Dantzig, the inventor of the simplex method.
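
To give a flavor of the parametric self-dual idea without reproducing a full pivot routine, here is a hedged numpy sketch of its starting point: perturb $b$ and $c$ by a parameter $\mu$ so that the trivial all-slack basis is feasible for both the primal and the dual, then (in the full method) pivot $\mu$ down to zero. The data below is made up, and the pivoting itself is omitted:

```python
import numpy as np

# Illustrative data for max c^T x  s.t.  A x <= b, x >= 0.
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([-4.0, 6.0])  # negative entry: initial basis not primal feasible
c = np.array([3.0, 2.0])   # positive entries: initial basis not dual feasible

# Perturb to b + mu*1 and c - mu*1. The smallest mu making the initial
# (all-slack) basis feasible for both problems satisfies
#   primal: b_i + mu >= 0  for all i,   dual: c_j - mu <= 0  for all j.
mu = max(float(np.max(-b)), float(np.max(c)), 0.0)
print("starting mu:", mu)  # 4.0 for this data

# Conditions (1) and (2) hold for the perturbed problem at this mu.
assert np.all(b + mu >= -1e-9) and np.all(c - mu <= 1e-9)

# The full method then performs simplex pivots that keep (1) and (2) true
# while decreasing mu; once mu reaches 0, the perturbation is gone and the
# current basis is optimal for the original problem. (Pivot rule omitted.)
```

In this parametric variant the perturbation effectively plays the role of Phase I, since for large enough $\mu$ both problems are feasible immediately.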

As a side comment, Vanderbei indicates that the parametric self-dual simplex method is more amenable to probabilistic analysis than the other versions of the simplex method.


I think that you're missing some additional nitpicky conditions here and there to guarantee optimality, namely complementary slackness, but the idea is essentially correct. You should read more about the Karush-Kuhn-Tucker (KKT) conditions in order to see how these equations apply not only to linear programming, but to constrained optimization in general.
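
For reference, under one common standard form (assumed here for illustration: $\max\, c^T x$ subject to $Ax \le b$, $x \ge 0$, with dual multipliers $y$), the KKT conditions specialize to exactly the three conditions listed above; since the problem is linear, they are both necessary and sufficient for optimality:

$$
\begin{aligned}
&Ax \le b, \quad x \ge 0 && \text{(primal feasibility)} \\
&A^T y \ge c, \quad y \ge 0 && \text{(dual feasibility / stationarity)} \\
&y_i (b - Ax)_i = 0 \ \forall i, \quad x_j (A^T y - c)_j = 0 \ \forall j && \text{(complementary slackness)}
\end{aligned}
$$

For a general constrained problem, stationarity requires the objective gradient to be a suitable nonnegative combination of the gradients of the active constraints; in the linear case it collapses into dual feasibility, which is why only the three conditions above remain.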