Theorem: Anti-differentiation is harder than differentiation
The question of why anti-differentiation is "harder" than differentiation was the topic of an earlier question. Some of the answers are interesting, but I'm not sure they fully answer it, and this question is not exactly the same. Someone pointed out that in numerical work integration is easy and differentiation is hard; someone else spoke of local versus global.
There ought to be an answer in the context of differential algebra rather than analysis, so the local-versus-global issue and the numerical issues won't be there to complicate the question.
Can the statement that one is easier than the other be made into a precise, provable theorem? If so, how?
Solution 1:
We are talking here about function terms. The set of such terms (the "elementary functions") can be defined recursively: $1$ and $x$ are function terms, and if $A$, $B$ are terms then $A+B$, $\lambda A$, $A*B$, $e^A$ and a certain number of similar expressions are again function terms. Denote by ${\cal E}$ the set of terms obtained in this way.

It is easy to prove that formal differentiation $D={d\over dx}$ is a map $D:\ {\cal E}\to{\cal E}$. Now the question arises whether this map is also surjective. We all know that it is not, but only a few of us have seen a proof of this, the reason being that it is an impossibility proof; cf. the proof that it is impossible to trisect an angle with ruler and compass.

We have to show that there are instances $F$ of the "integration problem" that no finite-length construction of the theory at hand (substitution, integration by parts, etc.) can solve. For such a proof we need a theory that transcends our bag of tricks, e.g. some sort of "differential Galois theory".
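To make the recursive definition and the totality of $D$ concrete, here is a minimal sketch (not part of the original answer): a small term language standing in for ${\cal E}$, restricted to $1$, $x$, sums, scalar multiples, products and $e^A$, with formal differentiation written as a structural recursion. The constructor names are my own choices for illustration.

```haskell
-- Sketch of the term language E and formal differentiation D : E -> E.
-- Restricted to a few constructors; the point is that D has one rule per
-- constructor, so it is a total map from terms to terms.
data Term
  = One                   -- the constant 1
  | X                     -- the variable x
  | Add Term Term         -- A + B
  | Scale Rational Term   -- lambda * A
  | Mul Term Term         -- A * B
  | Exp Term              -- e^A
  deriving Show

-- Formal differentiation: defined by structural recursion, hence total.
d :: Term -> Term
d One         = Scale 0 One
d X           = One
d (Add a b)   = Add (d a) (d b)
d (Scale c a) = Scale c (d a)
d (Mul a b)   = Add (Mul (d a) b) (Mul a (d b))   -- product rule
d (Exp a)     = Mul (d a) (Exp a)                 -- chain rule for e^A

-- No analogous structural recursion exists for anti-differentiation: by
-- Liouville's theorem there is no term whose derivative simplifies to
-- Exp (Mul X X), i.e. e^(x^2), even though that term lies in E.
main :: IO ()
main = print (d (Exp (Mul X X)))   -- (1*x + x*1) * e^(x*x)
```

The asymmetry the answer points to is visible already in this toy setting: each constructor of ${\cal E}$ comes with a differentiation rule, so $D$ is defined everywhere, whereas showing that some term has no preimage under $D$ requires an argument that goes beyond manipulating the rules themselves.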