On the absolute value of the Jacobian determinant - change of variables in multiple integrals

I would like to change variables in an integral and ran into an issue. Here are two simple examples that describe my questions:

Example 1. Suppose we want to change variables from $(x,y)$ to $(u,v)$ such that $x= u + v$ and $y= -u -2v$.

Using the Chain Rule: $$dx dy = (du + dv) (-du -2dv) = -2du dv -dv du = -2du dv + du dv = -du dv.$$
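To spell out the intermediate step: I dropped the $du\,du$ and $dv\,dv$ terms and used $dv\,du = -du\,dv$, i.e. I treated the product of differentials as anticommutative:

$$(du + dv)(-du - 2dv) = -du\,du - 2\,du\,dv - dv\,du - 2\,dv\,dv = -2\,du\,dv + du\,dv = -du\,dv.$$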

On the other hand, using the Jacobian determinant formula, we need the absolute value of the Jacobian determinant, which is $|-1| = 1$. So: $$dx\,dy = 1\cdot du\,dv = du\,dv \ne -du\,dv.$$ (Ref: http://www.math24.net/change-of-variables-in-double-integrals.html)
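For concreteness, the Jacobian I am using is

$$J = \frac{\partial(x,y)}{\partial(u,v)} = \begin{pmatrix} \partial x/\partial u & \partial x/\partial v \\ \partial y/\partial u & \partial y/\partial v \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -1 & -2 \end{pmatrix}, \qquad \det J = (1)(-2)-(1)(-1) = -1.$$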

What went wrong here?

Example 2. With one variable, we have $dx = x'(u)\,du$, so no absolute value is needed. What is the main difference between the one-variable case and the multivariable case?

I would really appreciate it if anyone could give me good references for this. Thanks in advance!


Consider the simple case of the univariate integral $\int_0^1 x \,\mathrm{d} x$ and apply to it the change of variables $x = 1-y$. It is easy to see that this flips the integration range, and this is reflected in the Jacobian being negative.

$$ \int_0^1 x \mathrm{d} x = \int_1^0 (1-y) (- \mathrm{d} y) = -\int_0^1 (1-y) (-\mathrm{d} y) = \int_0^1 (1-y) \mathrm{d} y $$
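As a sanity check, both sides evaluate to the same value:

$$\int_0^1 x \,\mathrm{d} x = \frac{1}{2}, \qquad \int_0^1 (1-y) \,\mathrm{d} y = \left[ y - \frac{y^2}{2} \right]_0^1 = \frac{1}{2}.$$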

The sign of the Jacobian indicates whether the change of variables is, or is not, orientation preserving (i.e. whether it flips the integration limits or not).

One usually writes $\vert J \vert$, keeping in mind that the orientation is being preserved.

The same story holds in the multivariate setting as well. Consider the line defined by $F(x,y) = x + y - 2 = 0$. The normal vector is $\vec{n} = \left\{ \partial_x F, \partial_y F \right\} = \left\{ 1, 1 \right\}$. When your change of variables is applied, the direction of the normal is flipped: $$ F(u,v) = (u+v) + (-u-2v) - 2 = -v - 2 \implies \vec{n}_2 = \{\partial_u F, \partial_v F\} = \{0, -1\} $$ Notice that $\langle \vec{n}, \vec{n}_2 \rangle = -1$.
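Here is a quick symbolic check of both facts; this is only an illustrative SymPy snippet (the variable names are my own), not something the argument above depends on:

```python
import sympy as sp

u, v = sp.symbols('u v')

# Change of variables from the question: x = u + v, y = -u - 2v
x = u + v
y = -u - 2*v

# Signed Jacobian determinant of (x, y) with respect to (u, v)
J = sp.Matrix([x, y]).jacobian([u, v])
print(J.det())                            # -1: negative, so orientation is reversed

# Normal of F(x, y) = x + y - 2 before and after the substitution
X, Y = sp.symbols('X Y')
F = X + Y - 2
n1 = sp.Matrix([F.diff(X), F.diff(Y)])    # (1, 1)
G = F.subs({X: x, Y: y})                  # simplifies to -v - 2
n2 = sp.Matrix([G.diff(u), G.diff(v)])    # (0, -1)
print(n1.dot(n2))                         # -1: the normal direction flips
```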


Ad Example 1: There is no such thing as $dx\>dy$ that can be computed by means of the chain rule. But there is the "unsigned" (actually: unoriented) Lebesgue measure ${\rm d}(x,y)$ in $(x,y)$-space and the "unsigned" (actually: unoriented) Lebesgue measure ${\rm d}(u,v)$ in $(u,v)$-space, and these two are related by the "symbolic equation" $${\rm d}(x,y)=|J(u,v)|\,{\rm d}(u,v)\ .$$

Ad Example 2: Given an interval $[a,b]\subset {\mathbb R}$ with $a\leq b$, the Fundamental Theorem of Calculus says that $$\int\nolimits_{[a,b]} f(x)\>{\rm d}(x)=\int_a^b f(x)\ dx\ ,$$ where on the left side we have a limit of Riemann sums and on the right side a difference $F(b)-F(a)$, with $F(\cdot)$ a primitive of $f$. This difference is an "oriented thing" to start with; e.g., one has $\int_b^a\ldots =-\int_a^b\ldots\ $ (the two one-variable substitution rules are written out explicitly below, after the reference).

The point is this: as soon as you are not just "calculating a volume", but the connection between functions and their derivatives (or primitives) comes into play, as in Stokes' theorem, you have to deal with "oriented things", and the language of differentials is the right environment to handle that. See also the following source:

http://www.math.ucla.edu/~tao/preprints/forms.pdf
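To spell out the one-variable contrast from Ad Example 2: for a $C^1$ change of variables $x=\varphi(u)$ the oriented substitution rule needs no absolute value, because reversed limits absorb the sign, whereas the unoriented (measure-theoretic) rule does:

$$\int_{\varphi(c)}^{\varphi(d)} f(x)\ dx=\int_c^d f\bigl(\varphi(u)\bigr)\,\varphi'(u)\ du\ ,\qquad \int\nolimits_{\varphi([c,d])} f(x)\>{\rm d}(x)=\int\nolimits_{[c,d]} f\bigl(\varphi(u)\bigr)\,|\varphi'(u)|\>{\rm d}(u)\ ,$$

where the second formula additionally requires $\varphi$ to be injective on $[c,d]$.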