Why can't the second fundamental theorem of calculus be proved in just two lines?

The second fundamental theorem of calculus states that if $f$ is continuous on $[a,b]$ and if $F$ is an antiderivative of $f$ on the same interval, then: $$\int_a^b f(x) dx= F(b)-F(a).$$

The proof of this theorem, which I have seen both in my book and on Wikipedia, is fairly long and involved. It uses the mean value theorem of integration and the limit of a Riemann sum. But I tried coming up with a proof (which I am sure is wrong), and it is barely two lines. Here it goes:

Since $F$ is an antiderivative of $f$, we have $\frac{dF}{dx} = f(x)$. Multiplying both sides by $dx$, we obtain $dF = f(x)dx$. Now $dF$ is just the small change in $F$ and $f(x)dx$ represents the infinitesimal area bounded by the curve and the $x$ axis. So integrating both sides, we arrive at the required result.

Firstly, what is wrong with my proof? And if it is so simple, what is so fundamental about it?

Multiplying the equation by $dx$ should be an obvious step to find the area, right? Why is the proof given on Wikipedia (or in my book) so long?

My teacher said that the connection between differential and integral calculus is not obvious, making the fundamental theorem a surprising result. But to me it is pretty trivial. So what were the wrong assumptions I made in the proof and what am I taking for granted?

It should be noted that I have already learnt differential and integral calculus, and I am being taught the "fundamental theorem" at the end, not as the first link between the two realms of calculus.

In response to the answers below: If expressing infinitesimals on their own is not "rigorous" enough to be used in a proof, then what more sense do they make when written along with an integral sign, or even in the notation for the derivative? The integral is just the continuous sum of infinitesimals, correct? And the derivative is just the quotient of two. How else should these be defined or intuitively explained? It seems to me that one needs to learn an entirely new part of mathematics before diving into differential or integral calculus. Plus we do this sort of thing in physics all the time.


The problem with your proof is the assertion

Now $dF$ is just the small change in $F$ and $f(x)dx$ represents the infinitesimal area bounded by the curve and the $x$ axis.

That is indeed intuitively clear, and is the essence of the idea behind the fundamental theorem of calculus. It's pretty much what Leibniz said. It may be obvious in retrospect, but it took Leibniz and Newton to realize it (though it was in the mathematical air at the time).

The problem with calling that a "proof" is the use of the word "infinitesimal". Just what is an infinitesimal number? Without a formal definition, your proof isn't one.

It took mathematicians several centuries to straighten this out. One way to do so is the long proof with limits of Riemann sums that you refer to. Another, newer way is to make the idea of an infinitesimal number rigorous enough to justify your argument. That can be done, but it is not easy.
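For the record, the core of that long proof can be displayed in a few lines (a sketch; the hard work is hidden in the final limit step). Take any partition $a = x_0 < x_1 < \cdots < x_n = b$. By the mean value theorem there is a $c_i$ in each $[x_{i-1}, x_i]$ with $F(x_i) - F(x_{i-1}) = f(c_i)(x_i - x_{i-1})$, so the sum telescopes: $$ F(b) - F(a) = \sum_{i=1}^{n} \bigl(F(x_i) - F(x_{i-1})\bigr) = \sum_{i=1}^{n} f(c_i)\,(x_i - x_{i-1}) \;\longrightarrow\; \int_a^b f(x)\,dx $$ as the partition is refined. Proving that the Riemann sums on the right really converge to the integral is exactly what fills the pages.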


Edit in response to this new part of the question:

Plus we do this sort of thing in physics all the time.

Of course. We do it in mathematics too, because it can be turned into a rigorous argument if necessary. Knowing that, we don't have to write that argument every time, and can rely on our trained intuition. In fact you can safely use that intuition even if you don't personally know or understand how to formalize it.


Variations on your question come up a lot on this site. Here are some related questions and answers.

  • Most useful heuristic?

  • What is $dx$ in integration?

  • What is the logic behind decomposing a derivative operator symbol in the population growth equation?

  • What does the derivative of area with respect to length signify?

  • Are there concepts in nonstandard analysis that are useful for an introductory calculus student to know?

  • The problem of instant velocity

  • Rigorous definition of "differential"

  • Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?


Allow me to translate your line "Multiplying both sides by $dx$, we obtain $dF=f(x)dx$." into what, strictly interpreted, you actually said:

"Pretending that the symbols $\mathrm{d}x$ and $\mathrm{d}F$ have existence outside of the symbol $\frac{\mathrm{d}F}{\mathrm{d}x}$, which is unjustified, we can multiply both sides by $\mathrm{d}x$, obtaining $$ 0 = \mathrm{d}F = f(x) \mathrm{d}x = 0 \text{,} $$ which, while true, has destroyed all information in our equation."

Why is this? Because $\frac{\mathrm{d}F}{\mathrm{d}x}$ is defined to be $$ \lim_{h \rightarrow 0} \frac{F(x+h) - F(x)}{(x+h) - x} \text{.} $$ Assuming this limit exists (which, happily, you have asserted), we could attempt to apply the limit laws to obtain $$ \frac{\lim_{h \rightarrow 0} \bigl(F(x+h) - F(x)\bigr)}{\lim_{h \rightarrow 0} \bigl((x+h) - x\bigr)} \text{.} $$ However, this gives a denominator of $0$, and so is disallowed by the limit laws. (In fact, it gives $0/0$, suggesting that one should be more careful in explaining how one is sneaking up on this ratio.) Since you ignore this problem, you have multiplied both sides of your equation by $\mathrm{d}x = \lim_{h \rightarrow 0} \bigl((x+h) - x\bigr) = 0$. Fortunately, your remaining left-hand side is $\lim_{h \rightarrow 0} \bigl(F(x+h) - F(x)\bigr) = 0$. So you arrive at the true equation $0=0$, but it is completely uninformative. There are no infinitesimals (whatever those are) remaining.
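The rigorous surrogate for "multiplying by $\mathrm{d}x$" is the finite-increment statement implicit in differentiability: $$ F(x+h) - F(x) = f(x)\,h + o(h) \qquad (h \to 0) \text{.} $$ The informal equation $\mathrm{d}F = f(x)\,\mathrm{d}x$ is shorthand for this statement about a finite increment $h$, and the $o(h)$ error term, which the two-line proof silently discards, is precisely what a real proof must control when adding up many increments.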

In response to OP's general response:

An integral is a limit of sums of non-infinitesimal quantities. An integral cannot be the sum of infinitesimals, because the sum of any number of zeroes, even infinitely many zeroes, is zero. This is quite easy to see by considering the (ordinal-indexed) sequence of partial sums, which are always zero.
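Concretely, the Riemann integral is defined as a limit of finite sums of ordinary real numbers: $$ \int_a^b f(x)\,\mathrm{d}x = \lim_{\|P\| \to 0} \sum_{i=1}^{n} f(x_i^*)\,\Delta x_i \text{,} $$ where $P$ is a partition $a = x_0 < \cdots < x_n = b$, $\Delta x_i = x_i - x_{i-1} > 0$, and $x_i^* \in [x_{i-1}, x_i]$. Every summand is a perfectly finite quantity; nothing infinitesimal is ever added up.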

The derivative is an indeterminate form of type "$0/0$". The integral is an indeterminate form of the type "$\infty \cdot 0$". As I note above, we must be careful in how we sneak up on such forms to avoid absurdities.

Attempts to use infinitesimals rigorously failed. (From the "Continuity and Infinitesimals" article of the Stanford Encyclopedia of Philosophy)

However useful it may have been in practice, the concept of infinitesimal could scarcely withstand logical scrutiny. Derided by Berkeley in the 18th century as “ghosts of departed quantities”, in the 19th century execrated by Cantor as “cholera-bacilli” infecting mathematics, and in the 20th roundly condemned by Bertrand Russell as “unnecessary, erroneous, and self-contradictory”

You observe that it seems one must learn some other branch of mathematics before attempting derivatives and integrals. I agree. To rigorously compute limits of difference quotients (derivatives) and limits of Riemann sums (integrals), one should first learn to find the limits of plain sequences. But there is a bootstrapping problem. As a consequence, in practice, we teach what one might call naive differentiation and integration in Calculus I/II/III and rigorous differentiation and integration in a class with a name like Advanced Calculus. The recipes for differentiating the common basket of functions (polynomials, trig functions, exponentials, and logs) are simple enough to teach early on. But the full $\epsilon$-$\delta$ treatment is there for those who face functions outside that basket.
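For completeness, here is the $\epsilon$-$\delta$ statement underlying that rigorous treatment of the derivative: $F'(x) = L$ means $$ \forall \epsilon > 0 \;\exists \delta > 0 : \quad 0 < |h| < \delta \implies \left| \frac{F(x+h) - F(x)}{h} - L \right| < \epsilon \text{.} $$ Note that $h$ is always a nonzero real number, so no division by zero, and no infinitesimal, ever appears.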

In the 20th century, there has been some progress in making infinitesimals rigorous. Useful articles are nonstandard analysis and dual numbers. (Aside: the first words of the nonstandard analysis article are

The history of calculus is fraught with philosophical debates about the meaning and logical validity of fluxions or infinitesimal numbers. The standard way to resolve these debates is to define the operations of calculus using epsilon–delta procedures rather than infinitesimals.

Since one wishes to perform mathematics starting from self-evident truths, one rejects objects with debatable meaning or questionable logical validity.) There are criticisms of nonstandard analysis. While I know that dual numbers can be used for automatic differentiation, I have never seen an attempt to use them as infinitesimals in a theory of integration.
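To illustrate the automatic-differentiation point, here is a minimal sketch of dual numbers in Python. A dual number $a + b\varepsilon$ with $\varepsilon^2 = 0$ carries a value and a derivative through arithmetic; the class name and the small set of operations below are my own illustration, not any particular library's API.

```python
# A minimal sketch of forward-mode automatic differentiation with
# dual numbers a + b*eps, where eps**2 = 0 (illustrative, not a
# real library's interface).

class Dual:
    def __init__(self, real, eps=0.0):
        self.real = real  # function value
        self.eps = eps    # derivative part

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate f at x + eps and read off the eps coefficient."""
    return f(Dual(x, 1.0)).eps


# d/dx (x**2 + 3x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

Note how the product rule falls out of the algebra automatically: because $\varepsilon^2 = 0$, the cross terms $ad + bc$ are exactly the derivative of a product. This is why dual numbers work so well for differentiation, even though (as noted above) no analogous trick is known for integration.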