Is every monotone map the gradient of a convex function?

Recently in a seminar someone mentioned that monotone maps are equivalent to gradients of scalar convex functions, but it's not clear to me why this is true. One direction of the equivalence is straightforward but the other is not (as far as I can tell).

Definition. A map $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$ is monotone on a convex set $C$ if $$(y-x)^T(F(y)-F(x))\ge0$$ for all $x,y \in C$.
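
For a concrete example to fix ideas: a linear map $F(x)=Ax$ is monotone on all of $\mathbb{R}^n$ exactly when the symmetric part of $A$ is positive semidefinite, since $$(y-x)^T(F(y)-F(x)) = (y-x)^TA(y-x) = \tfrac{1}{2}\,(y-x)^T(A+A^T)(y-x).$$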

One direction of the equivalence:

Prop. Let $f:\mathbb{R}^n \rightarrow \mathbb{R}$ be convex and sufficiently differentiable. Then $\nabla f$ is monotone.

Pf. Convex differentiable functions satisfy $$f(y) \ge f(x) + \nabla f(x)^T(y-x).$$

Swapping the roles of $x$ and $y$, we also have $$f(x) \ge f(y) + \nabla f(y)^T(x-y).$$ Add these inequalities and rearrange to get $(\nabla f(y)-\nabla f(x))^T(y-x) \ge 0$.∎
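
A quick numerical spot check of this direction (not part of the argument, just a sanity check using the convex function $f(x)=\log\sum_i e^{x_i}$, whose gradient is the softmax map):

```python
import numpy as np

# Sanity check: for the convex f(x) = log(sum(exp(x))), whose gradient is the
# softmax map, verify (grad f(y) - grad f(x))^T (y - x) >= 0 on random pairs.
def grad_logsumexp(x):
    e = np.exp(x - x.max())   # shift by the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    assert (grad_logsumexp(y) - grad_logsumexp(x)) @ (y - x) >= -1e-12

print("monotonicity held on all sampled pairs")
```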

Now the other direction:

Prop. Let $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$ be monotone and sufficiently differentiable. Then there exists a convex function $f:\mathbb{R}^n \rightarrow \mathbb{R}$ such that $F=\nabla f$.

Pf. ???

It seems like this should be easy, but I'm stuck and google/wikipedia have been of little help. I'm actually starting to doubt whether it is true.


There's still a bit missing from this. First, the notions of convex functions and monotone operators are not tied to Euclidean space, so stating the answer in terms of a coordinate choice $F = (F_1,F_2,\dots,F_n)$ is not ideal. Secondly, convex functions are not necessarily differentiable; instead, they have a subdifferential, and the subdifferential map is monotone.

So the broader question: when is a monotone map the subdifferential of a convex function?

That's a good question, and it was answered by the pioneer of convex analysis, Rockafellar. Theorem 24.8 of his 1970 book *Convex Analysis* covers the Euclidean case, and his papers (a 1966 paper and this 1970 correction: https://sites.math.washington.edu/~rtr/papers/rtr031-MaxMonoSubdiff.pdf ) cover the general Banach space case.

The result: a mapping $F$ is the subdifferential of a closed (lower semicontinuous) proper convex function $f$ if and only if $F$ is maximally cyclically monotone.

The definition of maximally cyclically monotone can be found on the first page of the 1970 paper above (in the definition, there are $n$ points, and this $n$ is arbitrary, not linked to the dimension).
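
For convenience, here is the definition in the single-valued case (paraphrasing the paper): $F$ is cyclically monotone if for every finite cycle of points $x_0, x_1, \ldots, x_m$ with $x_m = x_0$ (any $m$), $$\sum_{i=0}^{m-1}\langle F(x_i),\, x_{i+1}-x_i\rangle \le 0,$$ and it is maximally cyclically monotone if its graph is not properly contained in the graph of another cyclically monotone map. A cycle through just two points $x_0, x_1$ recovers the ordinary monotonicity inequality.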


Not all fields $F$ are gradients. If $F=(F_1,\dots,F_n)$ is $C^1$, a necessary condition for $F$ to be a gradient is that $$ \frac{\partial F_i}{\partial x_j}=\frac{\partial F_j}{\partial x_i},\quad 1\le i<j\le n. $$


Lion's answer has a correct statement about convexity, but without proof. I think a proof should be given in this thread, for future reference.

Proposition. Suppose $f:U\to \mathbb R$ is a $C^1$ function, where $U$ is a convex domain. Then the following are equivalent:

  1. $f$ is convex.
  2. The restriction of $f$ to every line segment contained in $U$ is convex.
  3. $\nabla f$ is monotone.

Proof. The equivalence of 1 and 2 is immediate from the definition of convexity. Since the derivative of $t\mapsto f(x+tv)$ is $\langle\nabla f(x+tv), v\rangle$, the equivalence of 2 and 3 amounts to the fact that a one-variable $C^1$ function is convex if and only if its derivative is nondecreasing. $\Box$
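
To spell out the 2 $\Leftrightarrow$ 3 step a bit more: fix $x,y\in U$ and set $g(t)=f(x+t(y-x))$, so $g'(t)=\langle\nabla f(x+t(y-x)),\,y-x\rangle$. Writing $x_\tau := x+\tau(y-x)$, for $s<t$ we get $$g'(t)-g'(s)=\frac{1}{t-s}\,\big\langle \nabla f(x_t)-\nabla f(x_s),\; x_t-x_s\big\rangle,$$ so $g'$ is nondecreasing (i.e. $g$ is convex) precisely when $\nabla f$ is monotone along the segment.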


Consider a $C^1$ monotone vector field $f=(f_1,\ldots,f_n)$ on $\mathbb R^n$. If there exists a function $G$ on $\mathbb R^n$ such that $\nabla G=f$, then $G$ is automatically convex (by the proposition above). So the existence of $G$ is the only thing one needs to verify.

Prop. The function $G$ exists if and only if $\frac{\partial f_i}{\partial x_j} = \frac{\partial f_j}{\partial x_i}$ for all $i$ and $j$.

Pf. The space $\mathbb R^n$ is contractible, so $H^1(\mathbb R^n)=0$. Consequently the $1$-form $\omega=\sum_i f_i\,dx_i$ is exact, $\omega=dG$, whenever it is closed, $d\omega=0$; and $d\omega=0$ is exactly the condition $\frac{\partial f_i}{\partial x_j} = \frac{\partial f_j}{\partial x_i}$.
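
Concretely, when the symmetry condition holds on all of $\mathbb R^n$ (or on any domain star-shaped around the origin), one standard Poincaré-lemma choice of potential is $$G(x)=\int_0^1 \langle f(tx),\,x\rangle\,dt,$$ and differentiating under the integral sign, using $\frac{\partial f_i}{\partial x_j}=\frac{\partial f_j}{\partial x_i}$, gives $\nabla G=f$.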


I would like to give a counterexample showing that a monotone map need not be a gradient.

Let $F(x_1,x_2)=(-x_2, x_1)$. Then $F$ is differentiable and monotone. Indeed, for all $(x_1, x_2), (y_1, y_2)\in\mathbb{R}^2$ we have \begin{eqnarray*} \langle F(x_1, x_2)-F(y_1, y_2), (x_1, x_2)-(y_1, y_2)\rangle&=&\langle(-x_2+y_2, x_1-y_1), (x_1-y_1, x_2-y_2)\rangle\\ & =& -(x_2-y_2)(x_1-y_1)+(x_1-y_1)(x_2-y_2)\\ &=&0. \end{eqnarray*} Suppose that there exists a differentiable function $f(x_1, x_2)$ such that $F(x_1, x_2)=\nabla f(x_1, x_2)$. Then $$ \frac{\partial f}{\partial x_1}=-x_2, \quad \frac{\partial f}{\partial x_2}=x_1. $$ By Schwarz's theorem we have $$ -1=\frac{\partial^2f}{\partial x_1\partial x_2}=\frac{\partial^2f}{\partial x_2\partial x_1}=1, $$ which is a contradiction.
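
A small numerical illustration of this counterexample (just a sketch double-checking the two claims: the monotonicity inner product vanishes identically, and the circulation of $F$ around the unit circle is $2\pi\ne 0$, which could not happen for a gradient field):

```python
import numpy as np

# F(x1, x2) = (-x2, x1): monotone (the defining inner product is identically 0),
# but not a gradient (its Jacobian [[0, -1], [1, 0]] is not symmetric, and its
# circulation around the unit circle is 2*pi instead of 0).
def F(p):
    x1, x2 = p
    return np.array([-x2, x1])

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert abs((F(x) - F(y)) @ (x - y)) < 1e-12   # monotonicity holds with equality

# Line integral of F along the unit circle, parametrized by t in [0, 2*pi).
t = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
dt = t[1] - t[0]
points = np.column_stack([np.cos(t), np.sin(t)])
tangents = np.column_stack([-np.sin(t), np.cos(t)])   # derivative of the path
integrand = np.array([F(p) @ v for p, v in zip(points, tangents)])
print(integrand.sum() * dt)   # ~ 6.283..., i.e. 2*pi, so F has no potential
```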