Why do we need specific linear programming methods now?
The simplex method belongs to the family of active-set methods: it "guesses" the set of active constraints (= a vertex of the feasible set), then refines this guess iteratively. Each iteration typically swaps a single constraint in or out of the active set, so it can take many iterations to converge.
Gradient projection methods allow you to modify the active set more aggressively, adding or dropping several constraints in a single iteration.
Interior-point methods instead move through the interior of the feasible set (that is, not from vertex to vertex), following a path toward the solution.
Depending on your problem (number of inequality constraints, availability of a warm start, ...), one of these methods may prevail over the others.
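To make the comparison concrete, here is a minimal sketch using SciPy's `linprog`, which exposes both a (dual) simplex solver and an interior-point solver through the HiGHS backend. The tiny LP here is just an illustrative example I made up; both solvers should of course reach the same optimum.

```python
from scipy.optimize import linprog

# Tiny illustrative LP: minimize -x - 2y
# subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
c = [-1, -2]
A_ub = [[1, 1], [1, 0]]
b_ub = [4, 3]
bounds = [(0, None), (0, None)]

# "highs-ds" is a dual-simplex (active-set style) solver;
# "highs-ipm" is an interior-point solver.
for method in ("highs-ds", "highs-ipm"):
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method=method)
    print(method, res.fun, res.x)  # both report the optimum at x=0, y=4
```

On a problem this small the choice is irrelevant; the differences (warm-starting, sparsity handling, iteration cost) only show up at scale.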
By "Lagrange multiplier method", do you mean solving the optimality (KKT) conditions? In the presence of inequality constraints, you're still left with a combinatorial decision to make ("is this constraint active or inactive at the solution?")... which is essentially the decision active-set methods manage :)
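To see why that decision is the hard part, here is a brute-force sketch (on a small made-up LP, with constraint data I chose for illustration): guess an active set, solve the resulting equality system for the vertex and its multipliers, and accept only if the point is feasible and the multipliers are nonnegative. This is exactly what you'd do by hand with KKT, and what the simplex method avoids enumerating exhaustively.

```python
import itertools
import numpy as np

# Illustrative LP: minimize c^T x  subject to  A x <= b  (2 variables, 4 constraints).
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 0.0, 0.0])

# Try every guess of two active constraints (a vertex in 2D).
for active in itertools.combinations(range(len(b)), 2):
    Aa, ba = A[list(active)], b[list(active)]
    if abs(np.linalg.det(Aa)) < 1e-12:
        continue  # active rows not linearly independent: no unique vertex
    x = np.linalg.solve(Aa, ba)            # vertex defined by the active set
    lam = np.linalg.solve(Aa.T, -c)        # stationarity: c + Aa^T lam = 0
    feasible = np.all(A @ x <= b + 1e-9)   # primal feasibility
    dual_ok = np.all(lam >= -1e-9)         # multipliers of inequalities must be >= 0
    if feasible and dual_ok:
        print("KKT point:", x, "with active set", active)
```

Only one active-set guess survives all the KKT checks here; with many constraints the number of guesses explodes, which is why methods that update the active set cleverly (rather than enumerating it) matter.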