Can derivatives be defined as anti-integrals?
I see integrals defined as anti-derivatives, but for some reason I haven't come across the reverse. Both seem equally implied by the fundamental theorem of calculus.
This emerged as a sticking point in this question.
Let $f(x)=0$ for all real $x$.
Here is one anti-integral for $f$:
$$ g(x) = \begin{cases} x &\text{when }x\in\mathbb Z \\ 0 & \text{otherwise} \end{cases} $$ in the sense that $\int_a^b g(x)\,dx = f(b)-f(a)$ for all $a,b$.
How do you explain that the slope of $f$ at $x=5$ is not $g(5)=5$?
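Here is a quick numerical sketch (my own illustration, with the interval $[0,10]$ and the partition sizes chosen arbitrarily) of why $g$ carries no slope information: its Riemann sums shrink to $0$ as the partition is refined, because $g$ is nonzero only on a set of measure zero.

```python
import numpy as np

# g(x) = x at the integers and 0 elsewhere (the anti-integral above).
def g(x):
    x = np.asarray(x, dtype=float)
    return np.where(np.isclose(x, np.round(x), rtol=0, atol=1e-12), x, 0.0)

def left_riemann_sum(fn, a, b, n):
    # left-endpoint Riemann sum of fn over [a, b] with n equal subintervals
    xs = a + (b - a) * np.arange(n) / n
    return np.sum(fn(xs)) * (b - a) / n

for n in [10, 1_000, 100_000, 1_000_000]:
    print(n, left_riemann_sum(g, 0.0, 10.0, n))
# The sums shrink to 0 = f(10) - f(0) as the partition is refined, even though
# g(5) = 5: values on a measure-zero set carry no slope information about f.
```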
The idea works better if we restrict all the functions we ever look at to "sufficiently nice" ones -- for example, we could insist that everything is real analytic.
Merely looking for a continuous anti-integral wouldn't suffice to recover the usual concept of derivative, because then something like $$ x \mapsto \begin{cases} 0 & \text{when }x=0 \\ x^2\sin(1/x) & \text{otherwise} \end{cases} $$ wouldn't have a derivative on $\mathbb R$ (which it does by the usual definition).
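If it helps, here is a small sympy check of that example (the symbols and the particular computation are my own; the underlying facts are standard): the difference quotient at $0$ tends to $0$, so the usual derivative exists there, yet the derivative away from $0$ oscillates and has no limit at $0$, so a "continuous anti-integral" definition would miss this function.

```python
import sympy as sp

x, h = sp.symbols('x h', real=True)
f = x**2 * sp.sin(1/x)                 # the formula away from 0; f(0) := 0

# The difference quotient at 0 is (f(h) - 0)/h = h*sin(1/h), which tends to 0,
# so the usual derivative exists at 0 and equals 0.
print(sp.limit(f.subs(x, h) / h, h, 0))

# Away from 0 the derivative is 2x*sin(1/x) - cos(1/x), which oscillates and
# has no limit as x -> 0, so f' exists everywhere yet is not continuous at 0.
print(sp.diff(f, x))
```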
In a sense your question is very natural. Let's take an informal approach to it, and then see where the technicalities arise. (That's how a lot of research mathematics works, by the way! Have an intuitive idea, and then try to implement it carefully. The devil is always in the details.)
So, one way to tell the familiar story of one-variable calculus is as follows:
1. Define the derivative $f'$ of a function $f$ as the limit of the difference quotient, $h^{-1}(f(x+h)-f(x))$, as $h\to0$.
2. Define an anti-derivative of a function $f$ as a function $F$ for which $F'=f$.
3. Define the definite integral of a function $f$ over $[a,b]$, say as the limit of Riemann sums.
4. Discover that (2) and (3) are related, in the sense that $$\int_a^bf=F(b)-F(a)$$ so long as $F$ is any anti-derivative of $f$ (a numerical sketch follows this list).
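To make step 4 concrete, here is a small numerical sketch (the choices $f=\cos$, $F=\sin$, and the interval are mine): midpoint Riemann sums of $f$ over $[a,b]$ converge to $F(b)-F(a)$.

```python
import numpy as np

# A concrete instance of step 4 (my choice of example): f = cos, F = sin.
f, F = np.cos, np.sin
a, b = 0.0, 2.0

for n in [10, 100, 1_000, 10_000]:
    xs = a + (b - a) * (np.arange(n) + 0.5) / n      # midpoints of n subintervals
    riemann_sum = np.sum(f(xs)) * (b - a) / n
    print(n, riemann_sum, F(b) - F(a))
# The Riemann sums converge to F(b) - F(a) = sin(2) - sin(0).
```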
Now, your idea is that you can imagine doing this the other way around, as follows:
1. Define the definite integral of a function $f$ over an interval $[a,b]$, say as a limit of Riemann sums.
2. Define an anti-integral of a function $F$ as a function $f$ for which $$F(x)-F(0)=\int_0^xf$$
3. Define the derivative of a function, as the limit of the difference quotient.
4. Discover that (2) and (3) are related, in the sense that one anti-integral of $f$ is just $f'$, so long as $f'$ is defined (see the sketch after this list).
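And here is the flipped relation checked numerically, with a function of my own choosing (called $F$ below, with $F=\exp$ so that $F'=\exp$ too): the running Riemann sums of $F'$ track $F(x)-F(0)$, so $F'$ is indeed one anti-integral of $F$.

```python
import numpy as np

# A concrete instance of the flipped step 4 (again my choice): F = exp, F' = exp.
F = np.exp
Fprime = np.exp

xs = np.linspace(0.0, 1.0, 100_001)
dx = xs[1] - xs[0]

# Running (left-endpoint) Riemann sums of F' approximate x -> F(x) - F(0),
# so F' is indeed one anti-integral of F.
running = np.concatenate([[0.0], np.cumsum(Fprime(xs[:-1]) * dx)])
print(np.max(np.abs(running - (F(xs) - F(0.0)))))    # small (about 1e-5)

# Changing F' at finitely many points would not change any of these limits,
# which is exactly the non-uniqueness problem discussed below.
```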
The trouble in both stories arises in steps 2 and 4. In both versions, step 4 is a form of the Fundamental Theorem.
The Problem with Step 2
In both the standard and the flipped story, step 2 poses existence and uniqueness problems.
In the standard story, an anti-derivative of $f$ may not even exist; one sufficient condition is to require that $f$ be continuous, but that is not necessary. And even if you do require that $f$ be continuous, you're always going to have non-uniqueness. Thus "anti-differentiation" construed as an operation is not really a bona fide "inverse" operation, because it is not single-valued. Or in other words, differentiation is not injective: it identifies many different functions. (Exactly which functions it identifies depends on the topology of the domain they're defined on.)
In the flipped story, again note that we certainly will never have uniqueness. Given any anti-integral $f$, you can find infinitely many others by changing the values of $f$ on a set of measure zero. We also aren't guaranteed existence of an anti-integral for a given $F$, and this time not even the continuity of $F$ will serve as a sufficient condition. What we need is an even stronger condition: absolute continuity.
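The standard witness for that last sentence (my example; it isn't mentioned above) is the Cantor function: it is continuous on $[0,1]$ with derivative $0$ almost everywhere, yet it climbs from $0$ to $1$, so it can have no anti-integral at all. A rough sketch:

```python
# A rough numerical sketch of the Cantor function C on [0, 1], computed from
# its ternary-digit description.  C is continuous and C' = 0 almost everywhere,
# so any anti-integral f of C would have to equal 0 almost everywhere; but then
# the integral of f from 0 to 1 would be 0, not C(1) - C(0) = 1.  Hence this
# continuous F = C has no anti-integral.
def cantor(x, depth=40):
    value, scale = 0.0, 0.5
    for _ in range(depth):
        if x < 1/3:               # ternary digit 0: no contribution, zoom in
            x *= 3
        elif x > 2/3:             # ternary digit 2: binary digit 1, zoom in
            value += scale
            x = 3*x - 2
        else:                     # landed in a middle third: C is constant here
            return value + scale
        scale /= 2
    return value

print(cantor(0.0), cantor(1.0))   # 0.0 and (up to truncation) 1.0
print(cantor(0.5), cantor(0.2))   # 0.5 and 0.25: constant on whole middle thirds
```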
The Problem with Step 4
In the standard story, the catch is in "so long as $F$ is any anti-derivative of $f$." The problem is that not every Riemann integrable function has an anti-derivative. If we want to guarantee an anti-derivative, we can impose the additional hypothesis that $f$ is continuous (which is again sufficient but not necessary).
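A standard witness here (my example, not the answer's) is a step function: it is Riemann integrable on $[-1,1]$, but it cannot be anyone's derivative, since its running integral has a corner at $0$ and derivatives satisfy the intermediate value property. A quick numerical sketch:

```python
import numpy as np

# A step function is Riemann integrable on [-1, 1] ...
step = lambda t: np.where(t < 0, 0.0, 1.0)

xs = np.linspace(-1.0, 1.0, 200_001)                        # grid spacing 1e-5
dx = xs[1] - xs[0]
A = np.concatenate([[0.0], np.cumsum(step(xs[:-1]) * dx)])  # A(x) ≈ ∫_{-1}^x step

# ... but its running integral has a corner at x = 0: the one-sided difference
# quotients of A disagree there, so A is not differentiable at 0, and the step
# function has no anti-derivative on any interval containing 0.
i0, k, h = 100_000, 1_000, 0.01          # index of x = 0, offset, h = k*dx
print((A[i0] - A[i0 - k]) / h)            # ≈ 0  (slope from the left)
print((A[i0 + k] - A[i0]) / h)            # ≈ 1  (slope from the right)
```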
A similar problem arises in the flipped scenario: given an arbitrary $f$, it might not have an anti-integral. The fundamental theorem for Lebesgue integrals shows that it's both necessary and sufficient to require that $f$ be absolutely continuous, at least when we work with the Lebesgue definite integral instead of the Riemann definite integral. And since integrals are not sensitive to values on a set of measure zero, the best conclusion we can draw in that case is that an anti-integral of $f$ equals $f'$ "almost everywhere" (meaning, everywhere except on a set of measure zero).
The Upshot
Note that even in the familiar story, we don't define integrals as anti-derivatives. Thus you should not expect to be able to define derivatives as anti-integrals. The essential obstruction to this sort of definition is existence and uniqueness.
In both scenarios, we first specify the seemingly unrelated limit-based definitions of derivatives and definite integrals. We then discover how anti-derivatives are related to integrals (the standard story) or how anti-integrals are related to derivatives (the flipped story), assuming enough regularity of the functions involved to resolve the existence and uniqueness problems.
From the point of view of analysis (as hinted at in Henning Makholm's answer) the issue is that the mapping $I:f'\mapsto f$ is very far from one-to-one. When you try to invert it, you find that a great many functions are possible "anti-integrals" of a given function. While this occurs for $d:f\mapsto f'$ as well, there is a robust mathematical theory about how to address it and how to describe the set of anti-derivatives of a given function. For example, if $f$ is defined on $[a,b]$ then all anti-derivatives of $f$ are of the form $$F_i(x)=c_i + \int_a^x f(t)\,dt$$ for constants $c_i$. Although in some contexts the situation becomes more complicated (for example, if we look at $1/x$ defined on $[-1,0)\cup(0,1]$ then you have two constants, one for each side), there is a whole field that studies what happens for various domains.
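For what it's worth, here is a small sympy sketch of that two-constant phenomenon (the constants $c_1,c_2$ and the piecewise construction are my own illustration): on $[-1,0)\cup(0,1]$, any choice of the two constants gives a legitimate anti-derivative of $1/x$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
c1, c2 = sp.symbols('c1 c2')

# On the two-component domain [-1, 0) ∪ (0, 1], an anti-derivative of 1/x may
# use a different constant on each component ...
F = sp.Piecewise((sp.log(-x) + c1, x < 0), (sp.log(x) + c2, True))

# ... and every such choice differentiates back to 1/x on both pieces,
# whatever c1 and c2 are.
print(sp.diff(F, x))
```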
The situation for inverting $I$ is a lot less rosy. For one thing, you can take any finite subset of the domain and move the function's values around however you like without changing the integral. More generally, as long as two functions agree outside a set of measure zero, they will have the same integral. As far as I know there are no known ways to fruitfully analyze such a set of functions (a statement that has deep repercussions in machine learning and functional analysis).
A second issue is that anti-integrating doesn't always produce something you can differentiate. There are a wide variety of functions whose anti-integrals don't (or don't have to) come out differentiable! For example, if $1_\mathbb{Q}$ denotes the function that takes the value $1$ on rational inputs and $0$ on irrational inputs, then $1_\mathbb{Q}$ has Lebesgue integral $0$ over every interval (a similar example can be arranged for the Riemann integral, but it takes more work). So $1_\mathbb{Q}$ is a perfectly valid anti-integral of $f(x)=0$, yet if you take that anti-integral you can't differentiate it to get back to $f(x)=0$, because $1_\mathbb{Q}$ is nowhere differentiable.
A commenter mentions vector calculus, and it is true that something like this happens in vector calculus, but there are a couple of massive caveats.
Weak derivatives.
This is essentially the way one defines a weak derivative. If a function is not differentiable in the traditional sense, but it is (locally) integrable, then one may define a weaker notion of derivative through duality: the weak derivative of $f$ is the function $f'$ such that $$ \int f' u=-\int f u' $$ for all smooth, compactly supported test functions $u$. One can prove that the function $f'$ is unique as an element of $L^p$, that is, unique up to a set of measure zero. If $f$ is differentiable in the standard sense, then it is also differentiable in the weak sense, and both derivatives agree.
For example, the Dirichlet function $1_{\mathbb Q}$ is nowhere continuous, let alone differentiable. But its weak derivative exists, and is in fact the zero function. Indeed, since $1_{\mathbb Q}$ vanishes almost everywhere, $$ 0=\int 1_{\mathbb Q}\, u'=-\int 1'_{\mathbb Q}\, u $$ for every test function $u$, which forces $1'_{\mathbb Q}=0$ almost everywhere.
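As a concrete numerical check of the duality formula above (with ingredients of my own choosing): take $f(x)=|x|$, whose weak derivative is $\operatorname{sign}(x)$, and a single smooth compactly supported test function, a bump centered at $0.3$. The two sides of the formula agree.

```python
import numpy as np

# Numerical check of the duality formula with f(x) = |x|, whose weak derivative
# is sign(x), against one (hypothetical) smooth, compactly supported test
# function: a bump supported in (-0.7, 1.3).
def u(x):
    y = x - 0.3
    out = np.zeros_like(x)
    inside = np.abs(y) < 1
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

def du(x):                                   # u'(x), computed by hand
    y = x - 0.3
    out = np.zeros_like(x)
    inside = np.abs(y) < 1
    yi = y[inside]
    out[inside] = np.exp(-1.0 / (1.0 - yi ** 2)) * (-2.0 * yi / (1.0 - yi ** 2) ** 2)
    return out

xs = np.linspace(-2.0, 2.0, 400_001)
dx = xs[1] - xs[0]
lhs = np.sum(np.sign(xs) * u(xs)) * dx       #  ∫ f' u
rhs = -np.sum(np.abs(xs) * du(xs)) * dx      # -∫ f u'
print(lhs, rhs)                              # the two values agree to several decimals
```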