Is it possible to motivate higher differential forms without integration?

$\def\RR{\mathbb{R}}$I have in fact tried to tackle this problem when I teach higher differential forms. Here are some of the things I try.

**Before we do differential forms or manifolds:** For functions $f : \RR^n \to \RR^p$, introduce $Df$ and prove the multivariate chain rule.
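For concreteness, the statement in question, which everything below leans on: for differentiable $f : \RR^n \to \RR^p$ and $g : \RR^p \to \RR^q$,
$$ D(g \circ f)(x) = Dg(f(x)) \, Df(x), $$
a product of a $q \times p$ matrix with a $p \times n$ matrix.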

Let $f$ be a function on $\RR^n$. Define the Hessian of $f$, $D^2 f$, to be the matrix of second partials $\tfrac{\partial^2 f}{\partial x_i \, \partial x_j}$. Prove the multivariate second derivative test: if $Df = 0$ at $c$ and $D^2 f(c)$ is positive definite, then $c$ is a local minimum of $f$. (This isn't logically necessary for what follows, but it helps show that this is a useful concept.)
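The one-line reasoning behind the test, for $f$ of class $C^2$: since $Df(c) = 0$, the second-order Taylor expansion at $c$ reads
$$ f(c+h) = f(c) + \tfrac{1}{2}\, h^{\mathsf{T}} D^2 f(c)\, h + o(|h|^2), $$
and positive definiteness gives $h^{\mathsf{T}} D^2 f(c)\, h \geq \epsilon |h|^2$ for some $\epsilon > 0$, so $f(c+h) > f(c)$ for all small $h \neq 0$.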

Show that, if $c$ is a critical point, then the Hessian obeys a simple chain-rule-like formula. Show that, if $c$ is not a critical point, the formula for changing variables in the Hessian is a mess.
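In detail: writing the change of variables as $x = \phi(y)$ and $g = f \circ \phi$, two applications of the chain rule give
$$ \frac{\partial^2 g}{\partial y_j \, \partial y_k} = \sum_{i,l} \frac{\partial^2 f}{\partial x_i \, \partial x_l} \frac{\partial x_i}{\partial y_j} \frac{\partial x_l}{\partial y_k} + \sum_i \frac{\partial f}{\partial x_i} \frac{\partial^2 x_i}{\partial y_j \, \partial y_k}. $$
At a critical point the second sum vanishes and the Hessian transforms cleanly, as $D^2 g = (D\phi)^{\mathsf{T}} (D^2 f) (D\phi)$; away from critical points, the second sum, with its second derivatives of the coordinate change, is the mess.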

Also, during this pre-manifold time, I talk about the curl of a vector field on $\RR^2$ and prove Stokes' theorem for rectangles; but you asked me not to mention integration.

**Spend a bunch of time working with and getting used to $1$-forms, still in $\RR^n$:** Note that the multivariate chain rule means that $df$ is well defined. Note that $D^2 f$ is well defined as a quadratic form on the tangent space at critical points, but is not a well defined quadratic form on the tangent space in general; in other words, we don't have a coordinate-independent notion of the second derivative. All of this is just the computations from the pre-manifold discussion, now placed in a more sophisticated context.
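Spelled out, the well-definedness of $df$ is just the observation that the chain rule
$$ \frac{\partial f}{\partial y_j} = \sum_i \frac{\partial f}{\partial x_i} \frac{\partial x_i}{\partial y_j} $$
is exactly the transformation law required for $\sum_j \tfrac{\partial f}{\partial y_j} \, dy_j$ and $\sum_i \tfrac{\partial f}{\partial x_i} \, dx_i$ to be the same $1$-form.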

If we can't take the second derivative of a function, can we at least take the first derivative of a $1$-form? In coordinates, if we have $\omega = \sum g_i d x_i$, we can form the matrix of partials $\tfrac{\partial g_i}{\partial x_j}$. Could this be coordinate independent in some sense?

A problem! If $\omega = df$, then $\tfrac{\partial g_i}{\partial x_j}$ is the Hessian, which we just saw was bad. Let's try skew-symmetrizing this matrix, to give $\tfrac{\partial g_i}{\partial x_j} - \tfrac{\partial g_j}{\partial x_i}$. This throws away the Hessian: if $\omega = df$, we just get $0$. Is what is left over any better?

A miracle (which the students are assigned to grind out by brute force): this gives a well-defined skew-symmetric form on the tangent space. Define $2$-forms on $\RR^n$, and explain that we have just constructed $d : \Omega^1 \to \Omega^2$ and shown that it is well defined. In particular, when $n = 2$, we have just shown that the curl is a well-defined skew-symmetric form.
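The brute-force computation, in the notation above: if $\omega = \sum_i g_i \, dx_i = \sum_j h_j \, dy_j$, so that $h_j = \sum_i g_i \tfrac{\partial x_i}{\partial y_j}$, then the terms with second derivatives of the coordinate change are symmetric in $j, k$ and cancel in the skew-symmetrization, leaving
$$ \frac{\partial h_j}{\partial y_k} - \frac{\partial h_k}{\partial y_j} = \sum_{i,l} \left( \frac{\partial g_i}{\partial x_l} - \frac{\partial g_l}{\partial x_i} \right) \frac{\partial x_i}{\partial y_j} \frac{\partial x_l}{\partial y_k}, $$
which is exactly the transformation law of a skew-symmetric form on the tangent space.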


So if we're going to insist on avoiding integration, I suppose we'd better take some more derivatives. In particular, we want to learn how to take derivatives of tensors. Of course, famously, there is no generally well-defined notion of such a derivative without some additional structure, even on a smooth manifold. In fact, there are three common types of derivatives of tensors:

  1. covariant derivatives;
  2. Lie derivatives; and
  3. exterior derivatives.

The first two require extra structure (a metric/connection, or an extension of tangent vectors to vector fields when we define directional derivatives); the third instead requires a restriction on the class of tensors we can differentiate: to differential forms! So clearly the central idea in our motivation should be to answer the question

For what class of tensors can a derivative be defined with no further structure?

Focussing on covariant $(0,m)$ tensors as the objects which can be formed from scalar functions just by "taking derivatives" and multiplying them, a formal answer to this question is:

Theorem: For any connected smooth manifold $M$, if $T$ maps differentiable covariant tensor fields of type $(0,m)$ to those of type $(0,m+1)$ and is natural in the technical sense that $\phi^\star(T \omega) = T(\phi^\star \omega)$ for every diffeomorphism $\phi$ of $M$, then $T = k \ {\rm d}$ is a multiple of the exterior derivative. In particular, $T$ is characterized entirely by its action on totally antisymmetric tensors, i.e. differential forms, and vanishes on all tensors with other symmetry structures.

A fairly straightforward if tedious proof of this can be constructed by following e.g. Natural operations on covariant tensor fields (Leicher). (In fact, various stronger results are true; essentially ${\rm d}$ is more or less the unique natural differential operator acting on only one tensor. There are discussions in e.g. this MathOverflow question as well as in the Leicher paper.)


But for the purposes of motivation, what's the basic idea underlying this observation? Well, in order to be invariant under coordinate transformations, the expression for any such $T$ in local coordinates must be $$ (T \omega)_{i_1\ldots i_{m+1}} = c_{i_1 \ldots i_{m+1}}^{j_1 \ldots j_{m+1}} \frac{\partial}{\partial x^{j_1}} \omega_{j_2 \ldots j_{m+1}} $$ for some constant coefficients $c$. Why is this? Any explicit $x^i$ dependence on the RHS would violate invariance under shifting the coordinates $x^i$; homogeneous rescaling of the coordinates implies that the RHS must have weight $-1$ in $x$; and by smoothness this must arise from a single derivative with respect to $x$. (One might quite reasonably even impose these requirements as part of one's attempt to define a derivative.)
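To make the weight count explicit: under the rescaling $y^i = \lambda x^i$ we have $\partial x^k / \partial y^j = \lambda^{-1} \delta^k_j$, so the components of $(0,m)$ and $(0,m+1)$ tensors scale as
$$ \omega'_{j_1 \ldots j_m} = \lambda^{-m} \, \omega_{j_1 \ldots j_m}, \qquad (T\omega)'_{i_1 \ldots i_{m+1}} = \lambda^{-(m+1)} \, (T\omega)_{i_1 \ldots i_{m+1}}, $$
and the only smooth, shift-invariant way to supply the missing factor of weight $-1$ is a single $\partial / \partial x$.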

Now we have to decide what linear combinations of the derivatives of the components of $\omega$ can possibly be invariant under coordinate transformations. This is the effort of (4.2) in Leicher. But the gist of it is simply that under coordinate transformations, the LHS transforms by a product of $m+1$ factors $\partial x^j / \partial y^i$, whilst the RHS also involves $m+1$ such factors, with $m$ of them appearing inside the existing derivative. For invariance, one ultimately needs to be left with only the term in which the derivative acts on the components of $\omega$. This can only be achieved if the $j_i$ are totally antisymmetrized, in which case all terms where the derivative $\partial / \partial y^{j_1}$ acts upon a factor $\partial x^{k_p} / \partial y^{j_p}$ vanish due to the symmetry of partial derivatives.
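Schematically, with $\omega'$ the components in the $y$-coordinates and summation implied:
$$ \frac{\partial}{\partial y^{j_1}} \omega'_{j_2 \ldots j_{m+1}} = \frac{\partial x^{k_1}}{\partial y^{j_1}} \cdots \frac{\partial x^{k_{m+1}}}{\partial y^{j_{m+1}}} \frac{\partial \omega_{k_2 \ldots k_{m+1}}}{\partial x^{k_1}} \; + \; \sum_{p=2}^{m+1} \frac{\partial^2 x^{k_p}}{\partial y^{j_1} \, \partial y^{j_p}} \, (\text{remaining Jacobian factors}) \, \omega_{k_2 \ldots k_{m+1}}. $$
Each term of the second sum is symmetric under $j_1 \leftrightarrow j_p$, so antisymmetrizing over all of $j_1, \ldots, j_{m+1}$ kills it, leaving only the tensorially transforming first term.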

(In some sense, the result therefore comes down to the simple, neat fact that 'star transpositions' $(1 \ p)$ generate the whole symmetric group -- by requiring that the derivative is antisymmetrized with every index of $\omega$, we require that all indices of $\omega$ are antisymmetrized with each other.)
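(For the record, that fact takes one line: any transposition is a product of star transpositions,
$$ (p \ q) = (1 \ p)(1 \ q)(1 \ p), $$
and transpositions generate the symmetric group.)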

Therefore, in particular, only the totally antisymmetric part $\omega_{[j_2\ldots j_{m+1}]}$ contributes to $T \omega$.


So whilst it would be nice to tidy this up and give a briefer and ideally coordinate-free version of the argument, the idea is just that in a simple technical sense

the exterior derivative is the only natural notion of differentiation of a tensor

and the intuitive fact this rests upon is that

totally antisymmetrizing indices is the only way to avoid derivatives acting upon the Jacobian factors arising under a change of coordinates.


Regarding the sentence:

"Alternatively, answer this question in the negative by proving that Dennis and Inez will always invent integration if you make them think about higher exterior derivatives enough."

The answer is "yes" (though "enough" might take longer than their life expectancy). There are two ways to approach this:

a. Suppose they study smooth maps of $k$-manifolds into an $m$-manifold $M$ and come up with the idea of making a vector space out of these by working with chains (with real coefficients). Then they could ask "what is the dual vector space?" Under extra analytical assumptions on the elements of the dual space, which I will not spell out here, those elements are differential $k$-forms on $M$ (and vice versa). See

Whitney, Hassler, Geometric integration theory, Princeton Mathematical Series. Princeton, N. J.: Princeton University Press; London: Oxford University Press. XV, 387 p. (1957). ZBL0083.28204.

Theorem 10A, page 167, for details. Then they might discover the exterior differential as the dual of the boundary operator on chains (Stokes' Theorem).
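In formulas: the pairing of a $k$-form $\omega$ with a chain $c = \sum_i a_i \sigma_i$ is $\langle \omega, c \rangle = \sum_i a_i \int_{\sigma_i} \omega$, and Stokes' theorem
$$ \int_{c} d\omega = \int_{\partial c} \omega $$
says precisely that ${\rm d}$ is the adjoint of the boundary operator $\partial$ under this pairing.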

b. Conversely, if they discover (somehow) differential forms, they might ask themselves "what is the dual space of $\Omega^k(M)$?" The elements of the dual are known as currents. They will then probably try to find "concrete examples" of currents, which will most likely be Lipschitz currents, given by integration over Lipschitz submanifolds.
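For the record: a $k$-current on $M$ is a continuous linear functional $T$ on compactly supported $k$-forms, with boundary defined by duality,
$$ (\partial T)(\omega) = T(d\omega) \qquad \text{for } \omega \in \Omega^{k-1}_c(M), $$
so that the current $T_N(\omega) = \int_N \omega$ attached to a compact submanifold $N$ with boundary satisfies $\partial T_N = T_{\partial N}$, by Stokes' theorem once more.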