Is there a fundamental reason that $\int_b^a = -\int_a^b$?

Is there a fundamental reason that switching the order of the limits in an integral results in the negative, i.e., $$\int_b^af(x)\,dx = -\int_a^bf(x)\,dx?$$ As far as I can tell, this is just chosen as a convention so that the rule $$\int_a^bf(x)\,dx + \int_b^cf(x)\,dx = \int_a^cf(x)\,dx$$ works out. But I was wondering if there was some more fundamental reason for this, perhaps somehow relating to signed measures or something.

Background

The reason I'm asking is that we're trying to figure out how to define things like $$\sum_{n=4}^1n$$ in the computer algebra system SymPy (see this discussion). The natural thing, for me at least, is to define this as 0, since it represents a summation over an empty set ($\{n\mid 4\leq n\leq 1\}=\emptyset$). But it seems that some authors define this as $-\sum_{n=2}^3n$, so that the rule $\sum_{n=a}^bf(n) + \sum_{n=b + 1}^c f(n)= \sum_{n=a}^cf(n)$ holds for all $a$, $b$, $c$ (namely, Karr in "Summation in Finite Terms"). That got me thinking about integrals, and whether in that case the rule is also defined simply for convenience, or whether there is actually a fundamental motivation behind it in the definition of the integral.
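For concreteness, here is a minimal Python sketch of the two candidate conventions (the names `empty_sum` and `karr_sum` are made up for illustration; this is not SymPy's API):

```python
def empty_sum(f, a, b):
    # Convention 1: sum over the set {n | a <= n <= b}; empty when a > b.
    return sum(f(n) for n in range(a, b + 1))

def karr_sum(f, a, b):
    # Convention 2 (Karr): for a > b, define the sum as -sum_{n=b+1}^{a-1} f(n),
    # so that karr_sum(f, a, b) + karr_sum(f, b + 1, c) == karr_sum(f, a, c)
    # holds for *all* integers a, b, c.
    if a <= b:
        return sum(f(n) for n in range(a, b + 1))
    return -sum(f(n) for n in range(b + 1, a))

print(empty_sum(lambda n: n, 4, 1))  # 0
print(karr_sum(lambda n: n, 4, 1))   # -5, i.e. -(2 + 3)
```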

I know that summations are just special cases of integrals (in the Lebesgue sense, sums are integrals with respect to the counting measure), so learning why things work for integrals would help me understand how things should work for summations.


Go back to the definition of $\int_a^b f(x)\,dx$ as the limit of a Riemann sum. Look at how $\Delta x$ was defined. Therein lies your answer.

Remember that when integrating from $a$ to $b$, we had $\Delta x_i = x_{i+1}-x_i$, whereas if we integrate from $b$ to $a$, then $\Delta x_i=x_i-x_{i+1}$, which is the negative of the $\Delta x$ from before.
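A quick numerical illustration of this (a plain Python sketch using left-endpoint Riemann sums, not a serious quadrature routine):

```python
def riemann_sum(f, start, end, n=1000):
    # Left-endpoint Riemann sum over a partition walked from `start` to `end`.
    # When start > end, every increment dx = (end - start) / n is negative,
    # so the whole sum is negated term by term.
    dx = (end - start) / n
    return sum(f(start + k * dx) for k in range(n)) * dx

f = lambda x: x ** 2
print(riemann_sum(f, 0, 1))  # ~ 1/3
print(riemann_sum(f, 1, 0))  # ~ -1/3: every dx carries the opposite sign
```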


In order to deal with this question one has to distinguish (i) integrals with respect to a measure and (ii) integrals over a chain. For integrals over real intervals $[a,b]$ they are pretty much the same, so the difference does not become visible in everyday notation.

(i) After constructing Lebesgue measure $\mu$ on ${\mathbb R}$ and the integral with respect to this measure, it makes sense to consider expressions like $$\int_{[a,b]} f\ {\rm d}\mu\ .$$ Here $-\infty<a\leq b<\infty$, and the interval $[a,b]$ is just a measurable set; it has no forward or backward orientation. (I use the roman ${\rm d}$ to indicate that the unsigned measure is meant.)

(ii) Contrasting this, the Riemann integral $$\int_a^b f(x)\ dx\ :=\ \lim_\ldots\ \sum_{k=1}^N f(\xi_k)\ (x_k-x_{k-1})$$ (the limit taken over ever finer partitions $a=x_0,\,x_1,\,\ldots,\,x_N=b$, with sample points $\xi_k$) is conceptually an integral over a chain $\gamma$. The factors $(x_k-x_{k-1})$ are meant to be small differences, not measures of small intervals. The chain $\gamma$ connects the points $a$ and $b$, starting at $a$ and ending at $b$, and $b<a$ is allowed. In any case the boundary of this chain is the formal sum $\{b\}-\{a\}$. In this setup it is obvious that for arbitrary $a$, $b$, $c$ one has $$\int_b^a f(x)\ dx=-\int_a^b f(x)\ dx\ ,\quad \int_a^c f(x)\ dx=\int_a^b f(x)\ dx+\int_b^c f(x)\ dx\ .$$ The fundamental theorem of calculus, together with the accompanying set of rules for handling integrals, is based on this interpretation of the integral.
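To see how both displayed identities fall out of the fundamental theorem: if $F$ is an antiderivative of $f$, then $\int_a^b f(x)\,dx = F(b)-F(a)$ depends only on the boundary $\{b\}-\{a\}$ of the chain. A minimal Python sketch (the helper `oriented_integral` is hypothetical):

```python
F = lambda x: x * x  # an antiderivative of f(x) = 2*x

def oriented_integral(F, a, b):
    # Integral over the chain from a to b, via the fundamental theorem:
    # it depends only on the boundary {b} - {a}, so b < a is perfectly fine.
    return F(b) - F(a)

a, b, c = 2, -1, 5  # arbitrary, not necessarily ordered
assert oriented_integral(F, b, a) == -oriented_integral(F, a, b)
assert oriented_integral(F, a, c) == (oriented_integral(F, a, b)
                                      + oriented_integral(F, b, c))
```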

It is a fact that for reasonable $f$ and $a\leq b$ one has $$\int_{[a,b]} f\ {\rm d}\mu =\int_a^b f(x)\ dx\ .$$ Therefore we tend to replace the notation on the LHS of this equation by the notation on the RHS even in cases where only the Lebesgue integral makes sense.
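The distinction is easy to see numerically. In the sketch below (plain Python, midpoint rule; both helper names are invented for illustration), the measure-style integral over the set $[\min(a,b),\max(a,b)]$ is insensitive to the order of the endpoints, while the chain integral picks up a sign from the orientation alone:

```python
def measure_integral(f, a, b, n=10000):
    # Integral of f over the *set* [min(a, b), max(a, b)]: no orientation,
    # so swapping a and b changes nothing. For f >= 0 the result is >= 0.
    lo, hi = min(a, b), max(a, b)
    dx = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * dx) for k in range(n)) * dx

def chain_integral(f, a, b, n=10000):
    # Integral over the chain from a to b: reversing the chain flips the sign.
    dx = (b - a) / n
    return sum(f(a + (k + 0.5) * dx) for k in range(n)) * dx

f = lambda x: 1.0
print(measure_integral(f, 1, 0))  # 1.0: the set [0, 1] has measure 1
print(chain_integral(f, 1, 0))    # -1.0: the sign comes from orientation alone
```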