Why do people say the Fundamental Theorem of Calculus is so amazing? [closed]
The fundamental theorem of calculus in the fifth edition of Stewart's *Calculus* text is stated as:
Suppose $f$ is continuous on $[a,b]$. Then:
1). If $g(x) = \int_a^x f(t)\,dt$, then $g'(x) = f(x)$.
2). $\int_a^b f(x)\,dx = F(b) - F(a)$, where $F$ is any antiderivative of $f$.
So, when we interpret $g(x)$ as the function that tells us the "area so far" under the graph of $f$, I think $(1)$ is pretty straightforward... honestly, it seems that, with all the brilliant minds that came before Newton and Leibniz, this is something that should already have been clearly understood.
So, the reason the FTC is so "amazing" is usually stated as "OMG, we can calculate the area under the curve by knowing the value of ANY of its antiderivatives at ONLY the two boundary points!"
However, I feel a bit cheated. We define the function $g(x)$ as a limit of Riemann sums that involve the entire curve. So yeah, it's going to give the entire area under the graph of $f$ from $a$ to $x$ even though we only plug $x$ into the function, but that's not to say the calculation of our function didn't involve the whole curve, you know?
After this, it follows quite directly from the mean value theorem that any two antiderivatives differ by a constant, and so we arrive at $(2)$.
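Written out, that step is short (my own sketch of the standard argument, not part of Stewart's statement): if $F$ is any antiderivative of $f$ and $g(x)=\int_a^x f(t)\,dt$, then

```latex
% (F - g)' vanishes, so F - g is constant by the mean value theorem:
(F - g)'(x) = F'(x) - g'(x) = f(x) - f(x) = 0
  \quad\text{on } [a,b], \text{ so } F(x) = g(x) + C.
% Evaluating at the endpoints, the constant cancels (note g(a) = 0):
F(b) - F(a) = \bigl(g(b) + C\bigr) - \bigl(g(a) + C\bigr)
            = g(b) - g(a) = \int_a^b f(t)\,dt.
```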
Now, I hope I don't come off as too sarcastic in this post, because I genuinely suspect there is something my feeble mind is not seeing that makes this more amazing, and that one of the more capable thinkers who linger here can enlighten me.
This website is not the place for a nuanced history of calculus, but I will offer a short response to:

> So, when we interpret $g(x)$ as the function that tells us the "area so far" under the graph of $f$, I think (1) is pretty straightforward... honestly, it seems like with all the brilliant minds that came before Newton/Leibniz that this is something that should have already been clearly understood.
When Newton and Leibniz discovered this relationship between the derivative and the "area so far", several important ideas and definitions were only beginning to be clarified. Functions (in the modern sense) had not yet been invented. Finding areas of figures with curved boundaries involved limiting processes, but Riemann had not yet defined Riemann sums; Archimedes' calculations were essentially all that was known. The idea of the rate of change of a quantity that did not vary at a constant rate was hard to work with, since it involved reasoning with "infinitely small quantities".
It's a tribute to years of clarification that it all seems straightforward to you. That's a good thing - many students of calculus go through the problem-solving motions without ever grasping the idea.
Related: Why can't the second fundamental theorem of calculus be proved in just two lines?
> The Fundamental Theorem of Differential Calculus allows us to formulate theorems of Differential Calculus as theorems of Integral Calculus and vice versa.

This statement, which clearly indicates the importance of the fundamental theorem, is given in *Analysis by Its History* by E. Hairer and G. Wanner. They derive, in their historical context, the many theorems that are needed to obtain the fundamental theorem.
And they do an impressive job of presenting, in Figure 6.3, a genealogical tree of the fundamental theorem. This tree is a directed graph starting from a root node labeled "Def. of real numbers, Def. of lim, Logic", through nodes representing the theorems needed to derive the fundamental theorem, until we finally reach the fundamental theorem itself as the top node. This genealogical tree is worth studying in some detail, and a pleasure to do so.
To give a glimpse of this tree, I present a transition matrix corresponding to it. The rows and columns are labeled by the theorem numbers; the entry with index $i,j$ is a bullet if theorem $i$ is used to derive theorem $j$.
Transition matrix: \begin{align*} \begin{array}{c||cc|cccc|ccc|cccc|c|cc|c} &1.5&1.8&1.6&1.12&1.17&3.3&5.14&3.5&3.6&5.13&6.4&5.17&4.5&6.10&6.11&5.10&FT\\ \hline\hline 5.10&&&&&&&&&&&&&&&&&\bullet\\ 6.11&&&&&&&&&&&&&&&&&\bullet\\ \hline 6.10&&&&&&&&&&&&&&&\bullet&&\\ \hline 4.5&&&&&&&&&&&&&&&&\bullet&\\ 5.17&&&&&&&&&&&&&&&&&\bullet\\ 6.4&&&&&&&&&&&&&&\bullet&&&\\ 5.13&&&&&&&&&&&&&&&&&\bullet\\ \hline 3.6&&&&&&&&&&&&\bullet&\bullet&&&&\\ 3.5&&&&&&&&&&&&\bullet&&&&&\\ 5.14&&&&&&&&&&&&\bullet&&&&&\\ \hline 3.3&&&&&&&&&\bullet&&&&\bullet&&&&\\ 1.17&&&&&&&&&\bullet&&&&\bullet&&&&\\ 1.12&&&&&&&&\bullet&\bullet&&&&&&&&\\ 1.6&&&&&&&\bullet&&&&&&&&&&\\ \hline 1.8&&&&\bullet&\bullet&&&&&&&&&&&&\\ 1.5&&&&&&&&&&\bullet&&&&&&&\\ \hline Rt&\bullet&\bullet&\bullet&&&\bullet&&&&&\bullet&&&&&&\\ \end{array} \end{align*}
The horizontal lines of the matrix indicate the height levels of the nodes in the genealogical tree. The matrix lists $15$ theorems and one remark, each with its number; together with the root node and the top node, the tree consists of $18$ nodes. We see $24$ bullets corresponding to $24$ directed edges of the tree, which show that quite a lot of development was necessary to finally obtain the fundamental theorem.
Below is a list of the $15$ theorems plus one remark used in the genealogical tree.
- (Rt): Def. of real numbers, Def. of lim, Logic
- (1.5) Theorem: Consider two convergent sequences $s_n\to s$ and $v_n\to v$. Then, the sum, the product, and the quotient of the two sequences, taken term by term, converge as well, and we have \begin{align*} &\lim_{n\to\infty}\left(s_n+v_n\right)=s+v\\ &\lim_{n\to\infty}\left(s_n\cdot v_n\right)=s\cdot v\\ &\lim_{n\to\infty}\left(\frac{s_n}{v_n}\right)=\frac{s}{v}\qquad\text{if}\qquad v_n\neq 0\text{ and }v\neq 0\text{.} \end{align*}
- (1.6) Theorem: Assume that a sequence $\{s_n\}$ converges to $s$ and that $s_n\leq B$ for all sufficiently large $n$. Then, the limit also satisfies $s \leq B$.
- (1.8) Theorem (Cauchy 1821): A sequence $\{s_n\}$ of real numbers is convergent (with a real number as limit) if and only if it is a Cauchy sequence.
- (1.12) Theorem: Let $X$ be a subset of $\mathbb{R}$ that is nonempty and majorized (i.e., $\exists B\ \forall x\in X\ x\leq B$). Then, there exists a real number $\xi$ such that $\xi = \sup X$.
- (1.17) Theorem of Bolzano-Weierstrass (Weierstrass's lecture of 1874): A bounded sequence $\{s_n\}$ has at least one accumulation point.
- (3.3) Theorem: A function $f:A\to\mathbb{R}$ is continuous at $x_0\in A$ if and only if for every sequence $\{x_n\}_{n\geq 1}$ with $x_n\in A$ we have \begin{align*} \lim_{n\to \infty} f(x_n)=f(x_0)\qquad\text{if}\qquad \lim_{n\to\infty}x_n=x_0\text{.} \end{align*}
- (3.5) Theorem (Bolzano 1817): Let $f:[a,b]\to\mathbb{R}$ be a continuous function. If $f(a)<c$ and $f(b)>c$, then there exists $\xi \in (a,b)$ such that $f(\xi)=c$.
- (3.6) Theorem: If $f:[a,b]\to\mathbb{R}$ is a continuous function, then it is bounded on $[a,b]$ and admits a maximum and a minimum, i.e., there exist $u\in[a,b]$ and $U\in[a,b]$ such that \begin{align*} f(u)\leq f(x)\leq f(U)\qquad\text{for all}\qquad x\in[a,b]\text{.} \end{align*}
- (4.5) Theorem (Heine 1872): Let $A$ be a closed interval $[a,b]$ and let the function $f:A\to\mathbb{R}$ be continuous on $A$; then $f$ is uniformly continuous on $A$.
- (5.10) Theorem: If $f:[a,b]\to\mathbb{R}$ is continuous, then it is integrable.
- (5.13) Remark: Let $a<b<c$ and assume that $f:[a,c]\to\mathbb{R}$ is a function whose restrictions to $[a,b]$ and to $[b,c]$ are integrable. Then $f$ is integrable on $[a,c]$ and we have \begin{align*} \int_a^c f(x)\,dx=\int_{a}^b f(x)\,dx+\int_{b}^c f(x)\,dx\text{.} \end{align*}
- (5.14) Theorem: If $f(x)$ and $g(x)$ are integrable on $[a,b]$ (with $a<b$) and if $f(x)\leq g(x)$ for all $x\in[a,b]$, then \begin{align*} \int_{a}^b f(x)\,dx\leq \int_{a}^b g(x)\,dx\text{.} \end{align*}
- (5.17) The Mean Value Theorem (Cauchy 1821): If $f:[a,b]\to\mathbb{R}$ is a continuous function, then there exists $\xi\in[a,b]$ such that \begin{align*} \int_{a}^bf(x)\,dx=f(\xi)\cdot(b-a)\text{.} \end{align*}
- (6.4) Theorem: If $f:(a,b)\to\mathbb{R}$ is differentiable at $x_0\in(a,b)$ and $f^{\prime}(x_0)>0$, then there exists $\delta >0$ such that \begin{align*} &f(x)>f(x_0)\qquad\text{for all }x\text{ satisfying }x_0<x<x_0+\delta\text{,}\\ &f(x)<f(x_0)\qquad\text{for all }x\text{ satisfying }x_0-\delta<x<x_0\text{.}\\ \end{align*} If the function possesses a maximum (or minimum) at $x_0$, then $f^{\prime}(x_0)=0$.
- (6.10) Theorem (Rolle 1690): Let $f:[a,b]\to\mathbb{R}$ be continuous on $[a,b]$, differentiable on $(a,b)$, and such that $f(a)=f(b)$. Then, there exists $\xi\in (a,b)$ such that \begin{align*} f^{\prime}(\xi)=0\text{.} \end{align*}
- (6.11) Theorem (Lagrange 1797): Let $f:[a,b]\to\mathbb{R}$ be continuous on $[a,b]$ and differentiable on $(a,b)$. Then, there exists a number $\xi \in (a,b)$ such that \begin{align*} f(b)-f(a)=f^{\prime}(\xi)(b-a)\text{.} \end{align*}
- (FT) The Fundamental Theorem of Differential Calculus: Let $f(x)$ be a continuous function on $[a,b]$. Then, there exists a primitive $F(x)$ of $f(x)$, unique up to an additive constant, and we have \begin{align*} \color{blue}{\int_{a}^b f(x)\,dx=F(b)-F(a)}\text{.} \end{align*}
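To make the contrast raised in the question concrete, here is a small numerical sketch (my own illustration, not from Hairer and Wanner): the Riemann sum uses the whole curve, while the antiderivative is evaluated only at the two endpoints, yet the two numbers agree.

```python
# Numerical check of the fundamental theorem for f(x) = x^2 on [0, 1]:
# a Riemann sum (which samples the whole curve) should approach
# F(1) - F(0) = 1/3 for the antiderivative F(x) = x^3 / 3.

def riemann_sum(f, a, b, n):
    """Left Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: x ** 2         # the integrand
F = lambda x: x ** 3 / 3     # one antiderivative of f

approx = riemann_sum(f, 0.0, 1.0, 100_000)  # uses the whole curve
exact = F(1.0) - F(0.0)                     # uses only the two endpoints
print(approx, exact)
```

Refining the partition (increasing `n`) drives the Riemann sum toward the endpoint difference, which is exactly what $(FT)$ asserts.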