Here is an analytic proof not using differential forms. Restricted to $n=3$, it comes almost verbatim from Stewart, Calculus: Early Transcendentals.

To get started, let us naively assume that the region $U$ is convex. The reason for this seemingly excessive assumption is that we want to describe $U$ by a single two-sided inequality (of the form $f_1\leq x_i\leq f_2$) for each $i$. Let $D$ be the projection of $U$ onto the $x_1x_2\cdots\hat{x_i}\cdots x_n$ hyperplane (to avoid confusion, $\hat{}$ means the variable under it is absent). By the convexity of $U$, we can write $U=\{(x_1,\cdots,x_n)\in \mathbf{R}^n\mid(x_1,\cdots,\hat{x_i},\cdots,x_n)\in D\mbox{ and } f_1\leq x_i \leq f_2\}$ for some functions $f_1(x_1,\cdots,\hat{x_i},\cdots,x_n)$ and $f_2(x_1,\cdots,\hat{x_i},\cdots,x_n)$ from $\mathbf{R}^{n-1}$ to $\mathbf{R}$. Here $f_1$ and $f_2$ are $C^1$ because $\partial U$ is $C^1$ (see the appendix in Evans).

Remember we want to prove $$\int_U\frac{\partial u}{\partial x_i}\,dx=\int_{\partial U}u\nu^i\,dS.$$ The description of $U$ above allows us to write the LHS as an iterated integral $$\int_D\left(\int_{f_1(x_1,\cdots,\hat{x_i},\cdots,x_n)}^{f_2(x_1,\cdots,\hat{x_i},\cdots,x_n)}\frac{\partial u}{\partial x_i}\,dx_i\right)dA,$$ where $dA=dx_1\cdots\hat{dx_i}\cdots dx_n$. Applying the Fundamental Theorem of Calculus to the inner integral, we obtain $$\int_D\left[u(x_1,\cdots,f_2(x_1,\cdots,\hat{x_i},\cdots,x_n),\cdots,x_n)-u(x_1,\cdots,f_1(x_1,\cdots,\hat{x_i},\cdots,x_n),\cdots,x_n)\right]dA.$$

This is all we can do with the LHS for now. For the RHS, notice that $\partial U$ can be decomposed into three surfaces $S_1$, $S_2$ and $S_3$, where $\nu^i<0$ at every point of $S_1$, $\nu^i>0$ at every point of $S_2$, and $\nu^i=0$ on $S_3$ (the part of $\partial U$ parallel to the $x_i$ axis). Thus, the RHS is

$$\int_{S_2}u\nu^idS+\int_{S_3}u\nu^idS+\int_{S_1}u\nu^idS.$$

Since $\nu^i$ vanishes on $S_3$, only the first and last terms survive. From the geometric picture described at the beginning, the projections of $S_1$ and $S_2$ onto the $x_1x_2\cdots\hat{x_i}\cdots x_n$ hyperplane are both exactly $D$. Also keep in mind that $\nu^i$ is the direction cosine of $\nu$ with the $x_i$ axis in $\mathbf{R}^n$: on the graph $x_i=f_2$ we have $\nu^i=1/\sqrt{1+|\nabla f_2|^2}$ while $dS=\sqrt{1+|\nabla f_2|^2}\,dA$, so $\nu^i\,dS=dA$ (and similarly $\nu^i\,dS=-dA$ on $x_i=f_1$). Therefore, $$\int_{S_2}u\nu^idS=\int_Du(x_1,\cdots,f_2(x_1,\cdots,\hat{x_i},\cdots,x_n),\cdots,x_n)d A,$$ and $$\int_{S_1}u\nu^idS=-\int_Du(x_1,\cdots,f_1(x_1,\cdots,\hat{x_i},\cdots,x_n),\cdots,x_n)dA.$$
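As a quick numerical sanity check of the identity itself (the particular region and function are my own choices, not part of the proof), take $n=2$, $i=1$, $U$ the unit disk, and $u=e^{x_1}$; on the unit circle $\nu^1=\cos\theta$ and $dS=d\theta$. A short sketch with scipy:

```python
import numpy as np
from scipy.integrate import dblquad, quad

# LHS: integral of du/dx1 = exp(x1) over the unit disk,
# computed in polar coordinates (Jacobian r, inner variable r).
lhs, _ = dblquad(lambda r, t: np.exp(r * np.cos(t)) * r,
                 0, 2 * np.pi,  # outer variable: angle t
                 0, 1)          # inner variable: radius r

# RHS: boundary integral of u * nu^1; on the unit circle
# nu^1 = cos(t) and dS = dt.
rhs, _ = quad(lambda t: np.exp(np.cos(t)) * np.cos(t), 0, 2 * np.pi)

print(lhs, rhs)  # the two sides agree, both about 3.551
```

Both sides equal $2\pi I_1(1)$, where $I_1$ is a modified Bessel function, so the agreement is not a coincidence of discretization.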

This proves the theorem when $U$ is convex. For a general $C^1$ region $U$, I am thinking of cutting the region into convex chunks, so that when we glue them back together the surface integrals over touching faces simply cancel due to opposite normal directions. Of course, I hope the occasional non-smooth point won't introduce too much trouble.
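The gluing idea can be illustrated numerically in the simplest possible setting (my own toy example, with $i=1$ and an arbitrary smooth $u$): split the unit square into two rectangles and check that the shared face contributes with opposite signs, so the chunk identities sum to the identity for the whole region.

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Illustrative test function on rectangles [a, b] x [0, 1].
u = lambda x, y: np.sin(x) * np.exp(y)
ux = lambda x, y: np.cos(x) * np.exp(y)  # du/dx

def boundary(a, b):
    """Boundary integral of u * nu^1 over the rectangle [a,b] x [0,1]:
    nu^1 = +1 on the right edge, -1 on the left, 0 on top and bottom."""
    right, _ = quad(lambda y: u(b, y), 0, 1)
    left, _ = quad(lambda y: u(a, y), 0, 1)
    return right - left

def interior(a, b):
    """Integral of du/dx over [a,b] x [0,1]."""
    val, _ = dblquad(lambda y, x: ux(x, y), a, b, 0, 1)
    return val

# Gauss-Green holds on each convex chunk...
assert np.isclose(boundary(0.0, 0.5), interior(0.0, 0.5))
assert np.isclose(boundary(0.5, 1.0), interior(0.5, 1.0))
# ...and summing the chunks reproduces the whole square, because the
# shared face x = 0.5 appears once with nu^1 = +1 and once with -1.
assert np.isclose(boundary(0.0, 0.5) + boundary(0.5, 1.0),
                  boundary(0.0, 1.0))
print("interior face cancels")
```

The same cancellation is what one hopes survives for a genuine convex decomposition of a curved region.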


There is a simple proof of the Gauss-Green theorem if one takes as the starting point the Divergence theorem, familiar from vector calculus, \begin{equation} \int_{U}\mathrm{div}\,\mathbf{w}\,dx = \int_{\partial U} \mathbf{w}\cdot\mathbf{\nu}\,dS, \end{equation} where $\mathbf{w}$ is any $C^\infty$ vector field on $U\subset\Bbb{R}^n$ and $\mathbf{\nu}$ is the outward unit normal on $\partial U$.

Now, given the scalar function $u$ on the open set $U$, we can construct the vector field \begin{equation} \mathbf{w}=(0,\ldots,0,u,0,\ldots,0), \end{equation} where $u$ is the $i$th component. Then, by the Divergence theorem, we have \begin{equation} \int_U \mathrm{div}\,\mathbf{w}\,dx=\int_U u_{x_i}\,dx =\int_{\partial U}\mathbf{w}\cdot\mathbf{\nu}\,dS =\int_{\partial U}u\nu^i\,dS. \end{equation}
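The one-line computation $\mathrm{div}\,\mathbf{w}=u_{x_i}$ can be confirmed symbolically; here is a minimal sketch with sympy, where the particular $u$ and the choice $i=2$ in $\mathbf{R}^3$ are just for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = x**2 * sp.sin(y) * sp.exp(z)  # arbitrary smooth scalar field

# w = (0, u, 0): u placed in the i-th slot, here i = 2.
w = [sp.Integer(0), u, sp.Integer(0)]
coords = [x, y, z]
div_w = sum(sp.diff(w[k], coords[k]) for k in range(3))

# All other components are constant, so div w collapses to u_{x_i}.
assert sp.simplify(div_w - sp.diff(u, y)) == 0
print(div_w)  # prints the single term equal to du/dy
```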

In Evans' book (page 712), the Gauss-Green theorem is stated without proof, and the Divergence theorem is shown as a consequence of it. This may be the opposite of what most people are familiar with.


It is a special case of both Stokes' theorem and the Gauss-Bonnet theorem; the former has analogues even in network optimization and has a nice formulation (and proof) in terms of differential forms.

Some proofs are in:

  • Walter Rudin (1976), Principles of Mathematical Analysis
  • Robert & Ellen Buck (1978), Advanced Calculus (succinctly summarized in Denis Auroux's MIT OCW online lecture notes)
  • Harley Flanders, Differential Forms with Applications to the Physical Sciences (pp. 55-66)
  • Victor Katz (1979), The History of Stokes' Theorem
  • Lecture videos online (link 7 is Auroux's lecture)

Rudin's is, as usual, very clean and readable. Buck's/Auroux's approach is very approachable and standard. If that is too geometric, try one like Flanders'. Boothby's Intro to Differentiable Manifolds and Riemannian Geometry (for example) has a relatively short proof using differential geometry. In general, the differential forms/geometry approaches are more analytic, but all rely on some way of decomposing regions (or on some theorem or definition that treats the same concept) into simpler regions, which is natural when the integral is defined in terms of a Riemann sum.

At the risk of oversimplifying, I present a conceptual overview of a proof. The basic idea of Green's theorem is to see how to generalize the fundamental theorem of calculus (FTOC) to several variables. In addition to its assumptions, we add one about the region $\overline{U}\subset\mathbb{R}^n$, e.g. that it is compact (closed and bounded in $\mathbb{R}^n$) or a union of bounded and simply connected regions; the exact assumption depends on the method of the proof and its requirements. But all the proofs that I have seen boil down to a decomposition of $U$ into a countable union $\cup_{\alpha}U_\alpha$ of disjoint subregions which are more amenable to applying the FTOC. The subregions are "amenable" in two senses: firstly, each can be sliced in a direction parallel to a coordinate axis, thereby eliminating one space variable in a portion of $\int_Ud\omega$ to obtain a corresponding portion of $\int_{\partial U}\omega$; secondly, they are adjacent, so contributions from shared interior faces cancel.

$$ U=\cup_\alpha U_\alpha \qquad U_\alpha \text{ pairwise disjoint, convex} $$ $$ \omega=\sum_{i=1}^{n}\omega_i \qquad \omega_i= (-1)^{i-1}\,u\,dx^1 \wedge\cdots\wedge dx^{i-1} \wedge dx^{i+1} \wedge\cdots\wedge dx^n $$ $$ d\omega=\sum_{i=1}^n d\omega_i \qquad d\omega_i=\frac{\partial u}{\partial x^i}\,dx^1\wedge\cdots\wedge dx^n $$ $$ \begin{aligned} \int_{ U } d\omega &= \sum_i \int_{ U } d\omega_i = \sum_i \sum_\alpha \int_{ U_\alpha} d\omega_i \\ &= \sum_i \sum_\alpha \int_{\partial U_\alpha} \omega_i = \sum_i \int_{\partial U } \omega_i = \int_{\partial U } \omega \end{aligned} $$ (The sign $(-1)^{i-1}$ in $\omega_i$ compensates for the $i-1$ transpositions needed to bring $dx^i$ into position when computing $d\omega_i$.) The integrals along the interior boundaries cancel, due to orientation and adjacency. The above is a conceptual sketch of the derivation of Green's and Stokes' theorems. The portion to the right of a summation over $i$, dealing with the integrals $\int_{R}d\omega_i$ and $\int_{\partial R}\omega_i$ (for $R=U$ or $U_\alpha$), is Green's theorem. Typically, it is then used to derive Stokes' theorem.
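As a quick sanity check of the orientation conventions, take $n=2$ and $i=2$: for $d\omega_2$ to come out as $u_{x^2}\,dx^1\wedge dx^2$, the summand must carry a minus sign, $\omega_2=-u\,dx^1$, since $$ d(-u\,dx^1) = -u_{x^2}\,dx^2\wedge dx^1 = u_{x^2}\,dx^1\wedge dx^2 . $$ Stokes' theorem for this single summand, $\int_U d\omega_2=\int_{\partial U}\omega_2$, is then the familiar planar identity $$ \iint_U \frac{\partial u}{\partial x^2}\,dA = -\oint_{\partial U} u\,dx^1 , $$ which is one half of the classical Green's theorem (the case $P=u$, $Q=0$ of $\oint P\,dx+Q\,dy=\iint(Q_x-P_y)\,dA$).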