Please explain the intuition behind the dual problem in optimization.

I've studied convex optimization pretty carefully, but don't feel that I have yet "grokked" the dual problem. Here are some questions I would like to understand more deeply/clearly/simply:

  1. How would somebody think of the dual problem? What thought process would lead someone to consider the dual problem and to recognize that it's valuable/interesting?
  2. In the case of a convex optimization problem, is there any obvious reason to expect that strong duality should (usually) hold?
  3. It often happens that the dual of the dual problem is the primal problem. However, this seems like a complete surprise to me. Is there any intuitive reason to expect that this should happen?
  4. Does the use of the word "dual" or "duality" in optimization have anything to do with the dual space in linear algebra? Or are they just different concepts that go by the same name? And what about the use of the word "dual" in projective geometry: is there a connection there?
  5. You can define the dual problem and prove theorems about strong duality without ever mentioning the Fenchel conjugate. For example, Boyd and Vandenberghe prove a strong duality theorem without mentioning the Fenchel conjugate in their proof. And yet, people often talk as if the Fenchel conjugate is somehow the "essence" of duality, and make it sound as if the whole theory of duality is based on the Fenchel conjugate. Why is the Fenchel conjugate considered to have such fundamental importance?

Note: I will now describe my current level of understanding of the intuition behind the dual problem. Please tell me if you think I might be missing any basic insights.

I have read the excellent notes about convex optimization by Guilherme Freitas, and in particular the part about "penalty intuition". When we are trying to solve

\begin{align*} \text{minimize} &\quad f(x) \\ \text{such that} & \quad h(x) \leq 0 \end{align*}

one might try to eliminate the constraints by introducing a penalty when constraints are violated. This gives us the new unconstrained problem

\begin{equation} \text{minimize} \quad f(x) + \langle \lambda ,h(x) \rangle \end{equation}

where $\lambda \geq 0$. It's not hard to see that for a given $\lambda \geq 0$, the optimal value of this unconstrained problem is less than or equal to the optimal value for the constrained problem. This gives us a new problem — find $\lambda$ so that the optimal value for the unconstrained problem is as large as possible. That is one way to imagine how somebody might have thought of the dual problem. Is this the best intuition for where the dual problem comes from?
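
As a sanity check on this penalty picture, here is a minimal numerical sketch (assuming only NumPy; the toy instance is chosen purely for illustration): for every $\lambda \geq 0$ the penalized minimum is a lower bound on the constrained optimum, and maximizing over $\lambda$ closes the gap.

```python
import numpy as np

# Toy instance (chosen just for illustration):
#   minimize f(x) = x^2   subject to   h(x) = 1 - x <= 0,
# whose constrained optimum is x* = 1 with value 1.
def f(x):
    return x ** 2

def h(x):
    return 1.0 - x

xs = np.linspace(-5.0, 5.0, 20001)      # crude grid over x
primal_opt = f(xs)[h(xs) <= 0].min()    # optimal value of the constrained problem

def g(lam):
    """Penalized value g(lambda) = min_x f(x) + lambda * h(x), approximated on the grid."""
    return (f(xs) + lam * h(xs)).min()

lams = np.linspace(0.0, 5.0, 501)
duals = np.array([g(lam) for lam in lams])

# Weak duality: every g(lambda) with lambda >= 0 lower-bounds the constrained optimum.
assert np.all(duals <= primal_opt + 1e-6)

# Here g(lambda) = lambda - lambda^2 / 4, maximized at lambda = 2, where the
# bound equals the primal optimum (strong duality for this instance).
print(primal_opt, duals.max(), lams[duals.argmax()])
```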

Another viewpoint: the KKT conditions can be derived using what Freitas calls the "geometric intuition". Then, if we knew the value of the multipliers $\lambda$, it would be (often) much easier to find $x$. So, a new problem is to find $\lambda$. And if we can somehow recognize that $\lambda$ is a maximizer for the dual problem, then this suggests that we might try solving the dual problem.

Please explain or give references to any intuition that you think I might find interesting, even if it's not directly related to what I asked.


Here's what's really going on with the dual problem. (This is my attempt to answer my own question, over a year after originally asking it.)

(A very nice presentation of this material is given in Ekeland and Temam. These ideas are also in Rockafellar.)

Let $V$ be a finite dimensional normed vector space over $\mathbb R$. (Working in an inner product space or just in $\mathbb R^N$ risks concealing the fundamental role that the dual space plays in duality in convex optimization.)

The basic idea behind duality in convex analysis is to think of a convex set in terms of its supporting hyperplanes. (A closed convex set $\Omega$ can be "recovered" from its supporting hyperplanes by taking the intersection of all closed half spaces containing $\Omega$. The set of all supporting hyperplanes to $\Omega$ is sort of a "dual representation" of $\Omega$.)

For a convex function $f$ (whose epigraph is a convex set), this strategy leads us to think about $f$ in terms of affine functions $\langle m^*, x \rangle - \alpha$ which are majorized by $f$. (Here $m^* \in V^*$ and we are using the notation $\langle m^*, x \rangle = m^*(x)$.)

For a given slope $m^* \in V^*$, we only need to consider the "best" choice of $\alpha$ -- the other affine minorants with slope $m^*$ can be disregarded. \begin{align*} & f(x) \geq \langle m^*,x \rangle - \alpha \quad \forall x \in V \\ \iff & \alpha \geq \langle m^*, x \rangle - f(x) \quad \forall x \in V \\ \iff & \alpha \geq \sup_{x \in V} \quad \langle m^*, x \rangle - f(x) \end{align*} so the best choice of $\alpha$ is \begin{equation} f^*(m^*) = \sup_{x \in V} \quad \langle m^*, x \rangle - f(x). \end{equation} If this supremum is finite, then $\langle m^*,x \rangle - f^*(m^*)$ is the best affine minorant of $f$ with slope $m^*$. If $f^*(m^*) = \infty$, then there is no affine minorant of $f$ with slope $m^*$.

The function $f^*$ is called the "conjugate" of $f$. The definition and basic facts about $f^*$ are all highly intuitive. For example, if $f$ is a proper closed convex function then $f$ can be recovered from $f^*$, because any closed convex set (in this case the epigraph of $f$) is the intersection of all the closed half spaces containing it. (I still think the fact that the "inversion formula" $f = f^{**}$ is so simple is a surprising and mathematically beautiful fact, but not hard to derive or prove with this intuition.)
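
A toy example (data chosen purely for illustration) makes this concrete: take $V = \mathbb R$ and $f(x) = \tfrac12 x^2$. Then \begin{equation*} f^*(m^*) = \sup_{x} \; \langle m^*, x \rangle - \tfrac12 x^2 = \tfrac12 (m^*)^2, \end{equation*} attained at $x = m^*$, so the best affine minorant with slope $m^*$ is $\langle m^*, x \rangle - \tfrac12 (m^*)^2$, which touches the parabola at $x = m^*$. Applying the same formula once more gives $f^{**}(x) = \sup_{m^*} \; \langle m^*, x \rangle - \tfrac12 (m^*)^2 = \tfrac12 x^2 = f(x)$, a small instance of the inversion formula.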

Because $f^*$ is defined on the dual space, we see already the fundamental role played by the dual space in duality in convex optimization.

Given an optimization problem, we don't obtain a dual problem until we specify how to perturb the optimization problem. This is why equivalent formulations of an optimization problem can lead to different dual problems. By reformulating it we have in fact specified a different way to perturb it.

As is typical in math, the ideas become clear when we work at an appropriate level of generality. Assume that our optimization problem is \begin{equation*} \operatorname*{minimize}_{x} \quad \phi(x,0). \end{equation*} Here $\phi:X \times Y \to \bar{\mathbb R}$ is convex. Standard convex optimization problems can be written in this form with an appropriate choice of $\phi$. The perturbed problems are \begin{equation*} \operatorname*{minimize}_{x} \quad \phi(x,y) \end{equation*} for nonzero values of $y \in Y$.

Let $h(y) = \inf_x \phi(x,y)$. Our optimization problem is simply to evaluate $h(0)$.

From our knowledge of conjugate functions, we know that \begin{equation*} h(0) \geq h^{**}(0) \end{equation*} and that typically we have equality. For example, if $h$ is subdifferentiable at $0$ (which is typical for a convex function) then $h(0) = h^{**}(0)$.

The dual problem is simply to evaluate $h^{**}(0)$.

In other words, the dual problem is: \begin{equation*} \operatorname*{maximize}_{y^* \in Y^*} \quad - h^*(y^*). \end{equation*} We see again the fundamental role that the dual space plays here.

It is enlightening to express the dual problem in terms of $\phi$. It's easy to show that the dual problem is \begin{equation*} \operatorname*{maximize}_{y^* \in Y^*} \quad - \phi^*(0,y^*). \end{equation*}

So the primal problem is \begin{equation*} \operatorname*{minimize}_{x \in X} \quad \phi(x,0) \end{equation*} and the dual problem (slightly restated) is \begin{equation*} \operatorname*{minimize}_{y^* \in Y^*} \quad \phi^*(0,y^*). \end{equation*} The similarity between these two problems is mathematically beautiful, and we can see that if we perturb the dual problem in the obvious way, then the dual of the dual problem will be the primal problem (assuming $\phi = \phi^{**}$). The natural isomorphism between $V$ and $V^{**}$ is of fundamental importance here.

The key facts about the dual problem -- strong duality, the optimality conditions, and the sensitivity interpretation of the optimal dual variables -- all become intuitively clear and even "obvious" from this viewpoint.

An optimization problem in the form \begin{align*} \operatorname*{minimize}_x & \quad f(x) \\ \text{subject to} & \quad g(x) \leq 0, \end{align*} can be perturbed as follows: \begin{align*} \operatorname*{minimize}_x & \quad f(x) \\ \text{subject to} & \quad g(x) + y \leq 0. \end{align*}

This perturbed problem has the form given above with \begin{equation*} \phi(x,y) = \begin{cases} f(x) \quad \text{if } g(x) + y \leq 0 \\ \infty \quad \text{otherwise}. \end{cases} \end{equation*} To find the dual problem, we need to evaluate $-\phi^*(0,y^*)$, which is a relatively straightforward calculation (in the second step we substitute $y = -g(x) - q$ with $q \geq 0$). \begin{align*} -\phi^*(0,y^*) &= -\sup_{\substack{x,\, y \\ g(x) + y \leq 0}} \quad \langle y^*,y \rangle - f(x) \\ &= -\sup_{\substack{ x \\ q \geq 0 }} \quad \langle y^*, -g(x) - q \rangle - f(x) \\ &= \inf_{\substack{ x \\ q \geq 0 }} \quad f(x) + \langle y^*, g(x) \rangle + \langle y^*, q \rangle. \end{align*} We can minimize first with respect to $q$, and we will get $-\infty$ unless $\langle y^*, q \rangle \geq 0$ for all $q \geq 0$. In other words, we will get $-\infty$ unless $y^* \geq 0$.

The dual function is \begin{equation*} -\phi^*(0,y^*) = \begin{cases} \inf_x \quad f(x) + \langle y^*, g(x) \rangle \quad \text{if } y^* \geq 0 \\ -\infty \quad \text{otherwise}. \end{cases} \end{equation*}

This is the expected result.
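
As a quick sanity check, here is a small worked instance (the data are chosen purely for illustration): take $X = Y = \mathbb R$, $f(x) = e^x$ and $g(x) = -x$, so the constraint is $x \geq 0$. Then \begin{equation*} h(y) = \inf_{-x + y \leq 0} e^x = \inf_{x \geq y} e^x = e^y, \end{equation*} so $h(0) = 1$. The conjugate is $h^*(y^*) = \sup_y \; y^* y - e^y$, which equals $y^* \ln y^* - y^*$ for $y^* > 0$, equals $0$ at $y^* = 0$, and is $+\infty$ for $y^* < 0$. The dual problem, maximize $-h^*(y^*)$, therefore has value $\sup_{y^* \geq 0} \; y^* - y^* \ln y^* = 1$, attained at $y^* = 1$; for $y^* > 0$ the same value $y^* - y^* \ln y^*$ also falls out of the formula $\inf_x f(x) + \langle y^*, g(x) \rangle$ above. So $h(0) = h^{**}(0)$ (strong duality), and moreover $h'(0) = 1 = y^*$, which is exactly the sensitivity interpretation of the optimal dual variable.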


I'll take a crack at a couple of these questions (some of them are hard and would require more thought).

1) Here's a nice economic interpretation of duality (being a bit "fast and loose" with the details). Let's say $x$ represents the design of a widget, and $f(x)$ is the cost you will incur producing it. $x$ must satisfy "design constraints" given by $h(x)\leq 0$ (suppose for simplicity that we have only 1 constraint function). Out of curiosity, you decide to try farming out production: another company agrees to produce your widget $x$ and "charge" you $f(x)$ for it. Their goal is ultimately to maximize profit, but they can't charge you more than you would spend doing it yourself. So given a fixed $\lambda$, they need to find the design $x$ that minimizes $f(x)+\lambda h(x)$. If they don't, you'll be able to find a feasible design $y$ that has a lower cost $f(y)<f(x)$, and thus you'll make the widget yourself. Now that this company can do at least as well as you, they'll set about maximizing their profit by varying $\lambda$ (maybe $\lambda$ corresponds to a different process or something).

This interpretation is similar to that in Boyd and Vandenberghe, 5.4.4. Also I find 'game theoretic' interpretations helpful - the primal problem is your strategy while the dual corresponds to an opponent's strategy.

2) I don't have an intuitive answer for this other than to say that strong duality is equivalent to the existence of a saddle point of the Lagrangian.

3) For convex problems this makes sense because applying the convex conjugate (Fenchel transform) twice returns the closed convex envelope (the 'convexification') of the original objective function, which coincides with the original function in most 'nice' situations.
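
A quick example of that (chosen just for illustration): for the nonconvex $f(x)=\min\{(x-1)^2,(x+1)^2\}$, the biconjugate $f^{**}$ is the convex envelope, equal to $0$ on $[-1,1]$ and to $(|x|-1)^2$ outside. So "conjugating twice" only returns the original function when it is already closed and convex.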

4) Yes, but you definitely have to be careful here. The dual of a vector space is formally defined as the space of all continuous linear functionals on that space, and this concept lives 100% independently of optimization theory. However, you're correct to notice that the dual of a vector space does arise in the statement of the dual of an optimization problem. Let me explain: define $X=\mathbb{R}^n$ and $Y=\mathbb{R}^m$ and consider the problem:

$$ \begin{aligned} \text{minimize}\quad & f(x)\\ \text{such that} \quad & h(x)\leq 0, \end{aligned} $$ where $x\in X$, $f:X\rightarrow \mathbb{R}$, and $h:X\rightarrow Y$. Then, this problem has the following dual:

$$ \begin{aligned} \max_\lambda\quad & \inf_{x\in X} \{f(x)+\langle \lambda,h(x)\rangle\}\\ \text{such that}\quad & \lambda_i\geq 0 \quad \forall i,\ 1\leq i\leq m. \end{aligned} $$ Now, "formally", $\lambda$ is an element of the dual space of $Y$, since it is paired with elements of $Y$ through $\langle\lambda, y\rangle$, and hence can be thought of as a continuous linear functional on $Y$. But here $Y$ is finite dimensional, so $Y^*$ is isomorphic to $Y$ (this is the Riesz representation theorem in the Hilbert space setting), and the distinction isn't really necessary: you haven't gained anything by thinking of $\lambda$ as being an element of the dual space. It's possible that in infinite dimensional problems where $Y^*$ is not isomorphic to $Y$ you get something out of this, but I can't think of a good example.
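
If it helps to see the multipliers concretely, here is a minimal sketch (assuming the cvxpy package is available; the toy data are made up for illustration) in which the solver reports the dual variable attached to each constraint:

```python
import cvxpy as cp

# Toy instance: minimize ||x||^2 subject to x1 + x2 >= 1 and x >= 0.
# By the KKT conditions the optimum is x* = (0.5, 0.5), with multiplier 1
# on the first constraint (from 2 x_i = lambda) and 0 on the sign constraints.
x = cp.Variable(2)
constraints = [x[0] + x[1] >= 1, x >= 0]
problem = cp.Problem(cp.Minimize(cp.sum_squares(x)), constraints)
problem.solve()

print("primal optimum:", problem.value)                            # ~0.5
print("multiplier for x1 + x2 >= 1:", constraints[0].dual_value)   # ~1.0
print("multipliers for x >= 0:", constraints[1].dual_value)        # ~[0, 0]
```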

To my knowledge, there is zero connection between duality in projective geometry and duality in optimization. Duality in projective geometry is more a statement about the correspondence between points and hyperplanes (points and lines, in the projective plane). "Duality" in math really just means having 2 ways to think about a problem; it's as overused as words like "fundamental" and "canonical". Another classical example of duality comes from fluid dynamics and PDE: "Eulerian" coordinates vs. "Lagrangian" coordinates.

5) For convex problems, I would agree that the convex conjugate is key, though in general I'm not sure it helps a lot with the 'general' idea of duality since (a) not all interesting problems are convex and (b) the convex conjugate is legendarily hard to gain intuition for. Historically, this 'fundamental importance' of the convex conjugate might be traced to physics, where it turns out that a lot of interesting physical systems have a 'Lagrangian' that is convex in the velocities. The equations of motion can then be phrased essentially as a minimization (least-action) problem built from this Lagrangian. The convex conjugate in the velocity variable, it then turns out, is what is called the 'Hamiltonian', which represents the total energy and leads to an entirely different (one might say 'dual') formulation of the equations of physics. Thus we have yet again two ways to approach the same problem!
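
A standard illustration of that last point (included here just as an example): a free particle of mass $m$ has Lagrangian $L(v) = \tfrac12 m v^2$, convex in the velocity $v$, and its convex conjugate in $v$ is \begin{equation*} H(p) = \sup_{v} \; p v - \tfrac12 m v^2 = \frac{p^2}{2m}, \end{equation*} the familiar Hamiltonian (kinetic energy) written in terms of the momentum $p = m v$.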

Hope this was somewhat useful to you.


Consider the problem $$ \begin{aligned} \mbox{min} \quad& f(x) \\ \mbox{subject to} \quad& x\le a\\ \end{aligned} $$ illustrated below, where $f$ is convex:

[Figure: a convex function $f$ together with the constraint $x\le a$.]

Despite being the most basic problem with an inequality constraint, it already exhibits the essential features. Now consider the optimal value as a function of $a$, that is, $f_+(a)=\inf_{x\le a}f(x)$. Its graph looks like this:

[Figure: the graph of $f_+$.]

It's basically like the original $f$ except that we only have the descending parts and none of the ascending parts.

We're going to construct $f_+$ in a slightly roundabout way: not by considering the function itself, but its epigraph, the set of points above the graph. Here's the epigraph for the original $f$:

[Figure: the epigraph of $f$, with several lines just touching it.]

Notice I've drawn a bunch of lines just touching the epigraph, i.e. its subtangents. In general, any closed convex shape can be expressed as the intersection of half-planes. Those subtangents indicate a handful of the boundaries of those half-planes. We can construct those subtangents by considering the lines $y=\lambda x$ and then pushing those lines up or down so each is as high as possible without crossing $f$. We can find out how much pushing we need by solving the optimisation: $$ \begin{aligned} \operatorname*{maximise}_x \quad& \lambda x-f(x) \end{aligned} $$ and we call the optimal value $f^\ast(\lambda)$. $f^\ast$ is also known as the Legendre transform or Fenchel dual of $f$. It's basically telling us which half-planes define our epigraph. Interestingly, reconstructing the function $f$ from $f^\ast$ requires exactly the same expression. So the Fenchel dual also takes a collection of half-planes and gives you back the original function.

So now we want to construct the epigraph of $f_+$. We just want the descending parts of $f$. The epigraph is easy to construct: we take the intersection of just the half-planes whose boundaries are decreasing. Here's a picture:

[Figure: the epigraph of $f_+$, the intersection of the decreasing half-planes.]

So here's a process for constructing $f_+$: we take the Fenchel dual of $f$, we throw away the "half" where $\lambda>0$ (keeping only the non-positive slopes $\lambda\le0$), and then take the inverse Fenchel dual (which is in fact just the Fenchel dual again).
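
Here is a rough numerical sketch of exactly that process (assuming only NumPy; the particular $f(x)=(x-1)^2$ is just a stand-in for the convex $f$ in the pictures): discretise $f$, take its Fenchel dual on a grid of slopes, keep only the slopes $\lambda\le0$, and transform back. The result agrees with $f_+$ computed directly as a running minimum.

```python
import numpy as np

xs = np.linspace(-3.0, 3.0, 1201)   # grid over x (and over a)
f = (xs - 1.0) ** 2                 # stand-in convex f; f_+(a) = (a-1)^2 for a <= 1, else 0

# Fenchel dual on a grid of slopes: f*(lam) = max_x lam*x - f(x).
lams = np.linspace(-10.0, 10.0, 2001)
fstar = np.max(lams[:, None] * xs[None, :] - f[None, :], axis=1)

# Keep only the decreasing half-planes (slopes lam <= 0) and transform back:
# f_+(a) is approximately the max over lam <= 0 of lam*a - f*(lam).
keep = lams <= 0
f_plus_via_dual = np.max(lams[keep][:, None] * xs[None, :] - fstar[keep][:, None], axis=0)

# Direct construction: f_+(a) = min of f(x) over x <= a, i.e. a running minimum.
f_plus_direct = np.minimum.accumulate(f)

print(np.max(np.abs(f_plus_via_dual - f_plus_direct)))  # small discretisation error
```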

Now let's construct the dual to our original optimisation problem as described in numerous textbooks. We form the Lagrangian $$L(a,x,\lambda)=f(x)+\lambda (x-a)$$ We then form the function $$g(a,\lambda)=\min_x L(a,x,\lambda)=-f^\ast(-\lambda)-\lambda a.$$ This is basically (modulo a sign) the Fenchel dual of $f$ with an extra $-\lambda a$ term.

The dual problem, as in the textbooks, is $$ \begin{aligned} \mbox{maximise} \quad& g(a,\lambda)\\ \mbox{such that} \quad & \lambda\ge0\\ \end{aligned} $$ Remembering that $-\lambda a$ term, that's almost the inverse Fenchel dual, except we've maximised over just "half" of the $\lambda$s, those for which $\lambda\ge0$ (which, after the sign flip, are exactly the non-positive slopes). When we compute the optimal value of the dual problem we get $\max_{\lambda\ge0} g(a,\lambda)$, a function of $a$. If the problem is convex, we've just computed $f_+$ by another path.

So this gives what I think is a clear picture of what the dual problem is. Forming the dual problem is (modulo a vertical shift) converting the original problem into finding the subtangents. Solving the dual problem is converting that back to a function again, but using only the downward sloping lines.
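
A quick closed-form check of this picture (taking $f(x)=x^2$ as a stand-in for the convex $f$ in the figures): $g(a,\lambda)=\min_x\, x^2+\lambda(x-a) = -\tfrac{\lambda^2}{4}-\lambda a$, and maximising over $\lambda\ge0$ gives the value $a^2$ (at $\lambda^\ast=-2a$) when $a\le 0$, and the value $0$ (at $\lambda^\ast=0$) when $a\ge0$. That is exactly $f_+(a)=\inf_{x\le a}x^2$: the descending branch for $a\le0$, and the flat part once the unconstrained minimum becomes feasible.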

I'm hoping this also answers some of the other questions.

By the way, this is connected with duals in linear algebra. We typically define a half-space using $h(x)\le\mbox{constant}$ where $h$ is linear in $x$. So $h$ is an example of what is meant by a dual vector in linear algebra. When we work from the dual viewpoint we're looking at convex objects from the point of view of the dual vectors defining the hyperplanes that touch them.

While I'm here I may as well add that this is similar to the Hilbert transform in signal processing. The Hilbert transform involves throwing away "half" of the Fourier transform analogously to throwing away "half" of the Legendre transform. There's a deep connection.