How do you prove that $\{ Ax \mid x \geq 0 \}$ is closed?
Let $A$ be a real $m \times n$ matrix.
How do you prove that $\{ Ax \mid x \geq 0, x \in \mathbb R^n \}$ is closed (as in, contains all its limit points)?
The inequality $x \geq 0$ is interpreted component-wise.
This fact is used in some proofs of Farkas's lemma. It seems like it should be easy, but the proofs I've seen are unexpectedly complicated. Is there a very clear / easy / obvious proof of this fact?
(Note that linear transformations do not always map closed sets to closed sets, as discussed in this question. For example, let $S = \{ (x,y) \in \mathbb R^2 \mid y \geq e^x \}$ and let $T:\mathbb R^2 \to \mathbb R^2$ such that $T(x,y) = (0,y)$. Then $S$ is closed, but $T(S)$ is not closed.)
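As a quick numerical illustration of this counterexample (a sketch, not part of the original question): the points $(-k, e^{-k})$ lie on the boundary of $S$, and their images under $T$ converge to $(0,0)$, which is not in $T(S)$ since $y = 0$ never satisfies $y \geq e^x$.

```python
import math

# S = {(x, y) : y >= e^x} is closed; T(x, y) = (0, y) projects onto the y-axis.
# The points (-k, e^{-k}) lie in S, and their images (0, e^{-k}) converge to
# (0, 0) -- but (0, 0) is not in T(S), since e^x > 0 for every real x.
for k in [1, 5, 10, 20]:
    p = (-k, math.exp(-k))   # a point of S on the boundary y = e^x
    image = (0.0, p[1])      # T maps it onto the y-axis
    print(image)             # second coordinate -> 0, yet (0, 0) is excluded
```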
Edit: Here is a simple proof in the case where $A$ has full column rank. (A very similar proof is given in Nocedal and Wright, in the Notes and References at the end of chapter 12.) Let $y^*$ be a limit point of $\Omega = \{ Ax \mid x \geq 0, x \in \mathbb R^n \}$. There exists a sequence $(x_i)_{i=1}^\infty$ of points in $\mathbb R^n$ such that $x_i \geq 0$ for all $i$ and $A x_i \to y^*$ as $i \to \infty$. Let $B$ be a left inverse for $A$. Then $B A x_i \to B y^*$ as $i \to \infty$. In other words, $x_i \to x^*$ as $i \to \infty$, where we have defined $x^* = B y^*$. Clearly, $x^* \geq 0$ and $A x^* = y^*$. This shows that $y^* \in \Omega$.
(Alternatively, you could just note that if $A$ has full column rank then the mapping $x \mapsto Ax$ is a homeomorphism between $\mathbb R^n$ and $R(A)$, so it maps closed sets to closed sets. This shows that $\Omega$ is a closed subset of $R(A)$, and it follows that $\Omega$ is a closed subset of $\mathbb R^m$.)
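The left-inverse argument is easy to check numerically. Below is a minimal sketch using the Moore–Penrose pseudoinverse as the left inverse $B$ (any left inverse works when $A$ has full column rank); the random matrix and the particular $x^*$ are just illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # a generic 5x3 matrix: full column rank a.s.
B = np.linalg.pinv(A)            # the pseudoinverse is a left inverse: B A = I

assert np.allclose(B @ A, np.eye(3))

# If A x_i -> y*, then x_i = B A x_i -> B y* =: x*, so x* >= 0 and A x* = y*.
x_star = np.array([1.0, 0.0, 2.5])       # a nonnegative limit candidate
y_star = A @ x_star
x_recovered = B @ y_star
print(x_recovered)                       # B recovers the preimage of y*
```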
We denote by $a_i \in \mathbb R^m$, $i = 1, \ldots, n$ the columns of $A$. By a conic variant of Carathéodory's theorem, each conic combination of $\{a_i\}$ can be written as a conic combination of a linearly independent subset of $\{a_i\}$. Since there are only finitely many linearly independent subsets of $\{a_i\}$, it is sufficient to prove the claim for matrices $A$ which have full column rank (i.e., all columns are linearly independent). But in this case, the claim is easy to establish.
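The conic Carathéodory reduction in this argument is constructive: while the active columns (those with positive coefficient) are dependent, subtract a multiple of a null vector until some coefficient hits zero, leaving $Ax$ unchanged. A rough numerical sketch (the function name and tolerances are my own; floating-point rank tests make this a heuristic, not a proof):

```python
import numpy as np

def conic_caratheodory(A, x, tol=1e-10):
    """Rewrite y = A x (with x >= 0) as a conic combination of a linearly
    independent subset of the columns of A, per the conic Caratheodory step."""
    x = x.astype(float).copy()
    while True:
        active = np.where(x > tol)[0]
        if len(active) == 0:
            return x
        sub = A[:, active]
        if np.linalg.matrix_rank(sub) == len(active):
            return x                 # active columns already independent
        # A nontrivial (approximate) null vector of the active columns.
        _, _, vt = np.linalg.svd(sub)
        v = vt[-1]
        if not np.any(v > tol):
            v = -v                   # ensure some positive entry to step against
        # Largest step keeping x >= 0; A x is unchanged since sub @ v ~ 0.
        t = (x[active][v > tol] / v[v > tol]).min()
        x[active] -= t * v
        x[x < tol] = 0.0             # the minimizing coefficient drops to zero

# Example: the third column is the sum of the first two, so x0 is reducible.
A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
x0 = np.array([1.0, 1.0, 1.0])
x_reduced = conic_caratheodory(A, x0)
print(x_reduced)   # same y = A x, supported on independent columns only
```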
Here is an alternative approach. I wish I could give proper attribution – I am fairly certain I learned this in the context of some nonlinear programming coursework, but it was so long ago that my memory is fuzzy. Someone on the faculty at either Cornell or MIT in the early 90's showed me this, or at least gave me a hint. :-)
The approach relies on the following fact: any linear optimization problem (with finitely many variables and constraints) that has a finite optimal value attains that optimal value, i.e., it has at least one optimal solution. There are many ways of proving this fact, but there is one approach that does not rely on any convex analysis, so using this fact does not lead to a circular argument. (For the curious, this is a classic argument in linear programming: (1) Any feasible linear program (LP) can be reformulated as an equivalent LP in standard form; (2) Any feasible LP in standard form is amenable to the Simplex algorithm; (3) If the above LP is not unbounded, and an anticycling pivoting rule is used, the Simplex method will terminate finitely and produce an optimal solution; this solution of the standard form LP can then be easily transformed into an optimal solution of the original LP.)
So, suppose $y$ is a limit point of the set in question, and consider the following optimization problem: $$ \inf_{x} \|y-Ax\|_\infty\text{ subject to } x\ge 0. $$ (Here, $\|\cdot\|_\infty$ denotes the $\infty$-norm.) This optimization problem is clearly feasible (e.g., $x=0$ is a feasible solution), and its optimal value is $0$, since there exists a sequence of feasible solutions $\{x^k\}_k$ such that $\lim_{k\rightarrow\infty}Ax^k=y$.
As written, this optimization problem is not an LP, but can be reformulated as an equivalent LP using well-known reformulation "tricks." Therefore, we can conclude that the problem has an optimal solution, to wit, a point $x\ge 0$ such that $\|y-Ax\|_\infty=0$, or $y=Ax$, as desired.
The cone $C=\{y\colon y = Ax \text{ for some } x\ge 0\}$ is finitely generated (by the columns of $A$) and convex. By the Minkowski–Weyl theorem (an easy proof via Fourier–Motzkin elimination can be found here, Theorem 1) it is a polyhedral cone, that is, $C=\{y\colon By\le 0\}$ for some matrix $B$. From this representation it is clear that $C$ is closed, being an intersection of closed half-spaces.
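Fourier–Motzkin elimination can be carried out directly on the system $\{(x,y) : Ax \le y,\; y \le Ax,\; -x \le 0\}$, eliminating the $x$-variables to leave inequalities $By \le 0$ in $y$ alone. A small sketch for a toy cone (the helper `fm_eliminate` is my own minimal implementation; no redundancy removal, so it only suits tiny systems):

```python
import numpy as np

def fm_eliminate(M, j, tol=1e-12):
    """One Fourier-Motzkin step: eliminate variable j from the system M z <= 0."""
    zero = [r for r in M if abs(r[j]) <= tol]
    pos  = [r for r in M if r[j] >  tol]
    neg  = [r for r in M if r[j] < -tol]
    # Each (positive, negative) row pair combines, with positive multipliers,
    # into a row whose j-th coefficient is zero.
    combos = [(-q[j]) * p + p[j] * q for p in pos for q in neg]
    rows = zero + combos
    return np.array(rows) if rows else np.empty((0, M.shape[1]))

# Cone generated by the columns (1, 0) and (1, 1); variables z = (x1, x2, y1, y2).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
I = np.eye(2)
M = np.vstack([
    np.hstack([A, -I]),                # A x - y <= 0
    np.hstack([-A, I]),                # y - A x <= 0
    np.hstack([-I, np.zeros((2, 2))])  # -x <= 0
])
for j in (0, 1):                       # eliminate x1, then x2
    M = fm_eliminate(M, j)
B = M[:, 2:]                           # remaining inequalities involve y only

# Membership in {Ax : x >= 0} is now the visibly closed condition B y <= 0.
for y, inside in [((2.0, 1.0), True), ((1.0, 2.0), False), ((-1.0, 0.0), False)]:
    print(y, np.all(B @ np.array(y) <= 1e-9) == inside)
```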
OK, after struggling with elementary tools for a while and in vain, I had to invoke the "Closed Map Lemma", namely that a proper map (i.e. one for which pre-images of compact sets are compact) between locally compact Hausdorff spaces (i.e. spaces in which every point has a compact neighborhood) is closed. For example, see Theorem 2.6 of this paper.
In your case, assuming WLOG that $A$ has full column rank (see @gerw's remark above), the linear operator $A: x \mapsto Ax$ is a proper mapping between the (trivially) locally compact Hausdorff spaces $\mathbb R^n$ and $\mathbb R^m$: since $\|Ax\|_2 \ge \sigma_{\min}(A)\|x\|_2$ with $\sigma_{\min}(A) > 0$, preimages of bounded sets are bounded, and they are closed by continuity. Moreover, the nonnegative orthant $\mathbb R_+^n = [0,\infty)^n$ is a closed subset of the former space. This proves that $\{Ax \text{ s.t. } x \ge 0\} = A \mathbb R_+^n$ is closed in $\mathbb R^m$, and we're done.
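The properness claim rests on the bound $\|Ax\|_2 \ge \sigma_{\min}(A)\,\|x\|_2$, where $\sigma_{\min}(A) > 0$ exactly when $A$ has full column rank. A small numerical sanity check (random data, just illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))   # a generic 4x2 matrix: full column rank a.s.
sigma_min = np.linalg.svd(A, compute_uv=False).min()
assert sigma_min > 0              # full column rank <=> smallest singular value > 0

# ||A x|| >= sigma_min ||x||, so the preimage of any bounded set is bounded:
# x -> Ax is a proper map and the Closed Map Lemma applies.
for _ in range(1000):
    x = rng.standard_normal(2)
    assert np.linalg.norm(A @ x) >= sigma_min * np.linalg.norm(x) - 1e-9
print(sigma_min)
```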