Can anyone show an example of going through Liouville's differential algebra theorem?

WARNING: This is long and layman-like; you may have a hard time getting through it if you consider yourself a seasoned mathematician.

At one point I came across Liouville's theorem of differential algebra, but I don't understand the theorem's proof and basis, or really the theorem itself. I've contacted dozens of people who specialize in abstract algebra and who've written about it, and to my surprise, they don't understand it either! They literally don't know; what a shame. So, I defer this challenge to the Stack Exchange network.

So, I'm trying to break it down in terms of what I understand, which is not an advanced background in abstract algebra, but some experience in real and complex analysis.

$...$

I understand what a field is: you have elements in the field that are closed under $+,\ -,\ \cdot,\ /$. You can add, multiply, subtract and divide those elements and still retain some fundamental property. The rational numbers are an example: if you perform basic operations on rational numbers, the result is another rational number.

Okay, so far so good.

Now, we talk about rational functions. Rational functions apparently form a field, because you can add them, subtract them, multiply them and divide them and the result is still a rational function! Pretty nifty, eh? Well, not only that, but when you differentiate and compose rational functions, the result is still a rational function!

These are pretty nice properties; this is stronger than the concept of a field, since it includes composition and differentiation. But the antiderivative of a rational function is not always, and usually isn't, another rational function.
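As a sanity check of both claims (closure under differentiation, but not under antidifferentiation), here's a quick experiment one can run with sympy; the particular $f$ is just an arbitrary rational function I picked:

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 + 1)/(x**3 - x)  # an arbitrary rational function

# Differentiation stays inside the field: cancel() rewrites the
# derivative as a single ratio of polynomials.
print(sp.cancel(sp.diff(f, x)))

# Antidifferentiation does not: logarithms appear.
print(sp.integrate(f, x))  # -log(x) + log(x - 1) + log(x + 1)
```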

$...$

1.) Now, elementary functions include "algebraic" functions, so I don't understand how Liouville's theorem makes the leap from rational functions to general algebraic functions.

There doesn't seem to be a differential field extension that jumps from rational to algebraic. I don't exactly understand what a field extension is, but if there's one for logarithms, there ought to be one for algebraic functions. Although, maybe as part of some weirdly left-out explanation that probably should have been included, one could write some type of lemma about how any algebraic function is a solution $y$ of a polynomial equation $P(x,y) = 0$, and thus, given any rational relation $\frac{dy}{dx} = P(x,y)/Q(x,y)$, you can always just move the $Q(x,y)$ to the left to make $Q(x,y) \frac{dy}{dx} = P(x,y)$, and maybe somehow that explains something.
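One concrete instance I can at least work out by hand, which suggests algebraic functions really are closed under differentiation: if $y = \sqrt{x}$, then $y$ solves the polynomial equation $y^2 - x = 0$, and differentiating that equation implicitly gives

$$2y\,y' - 1 = 0 \quad\Longrightarrow\quad y' = \frac{1}{2y},$$

which is rational in $x$ and $y$. More generally, differentiating $P(x,y) = 0$ implicitly gives $y' = -\frac{\partial P/\partial x}{\partial P/\partial y}$, again rational in $x$ and $y$.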

2.) So, that's okay; I move on from not understanding what's going on to different articles that try to explain this concept of a "field extension." It's almost a straightforward concept just by its name, but the technical nuances require some deconstructing.

If a field exists over a particular set of elements, say the set $\mathbb{E}$, then a field extension is some kind of modification where you add new elements to form the field $\mathbb{F}$, and with the addition of those new elements, it is said that $\mathbb{E} \subset \mathbb{F}$, where the new $\mathbb{F}$ retains the same type of abstract fundamental property. What I think this is saying is: if you add new elements, then any combination of both the original elements and those new elements is also closed under $+,\ -,\ \cdot,\ /$. So if, for instance, you extend to logarithms, then if you add, subtract, divide and multiply a combination of rational functions and logarithms, the result is still a combination of rational functions and logarithms.

3.) The way this is notated is: given your ground field or starting field of rational functions $C(x)$, which I suppose represents functions rational in the variable $x$, you extend to $C(x,\ln(x))$. I still have a little bit of confusion here, because if the field is $C(x)$, the field of rational functions, is this saying the new field extension is all functions rational in logarithms, i.e. $P(\ln(x))/Q(\ln(x))$? I would think, similar to notation I might see with multivariate polynomials, that $C(x,\ln(x))$ is a field of functions that are rational in the variables $x$ and $\ln(x)$. An example would be $\frac{\ln(x)^2 + x - 1}{x^3 - \ln(x)}$.

Okay, I can see that, so you can add, subtract, divide and multiply any two functions that are rational in combinations of $x,\ln(x)$, functions $a(x,\ln(x))$ and $b(x,\ln(x))$, and the result will be another function $c(x,\ln(x)) \in C(x,\ln(x))$, or in other words, the result will be another function that is rational in the variables $x, \ln(x)$.
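A tiny example of that closure:

$$\frac{1}{x} + \frac{1}{\ln(x)} = \frac{\ln(x) + x}{x\ln(x)},$$

which is again rational in $x$ and $\ln(x)$.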

Then, usually without specifying all the field extensions being made, articles on the subject go on to state Liouville's theorem, leaving me with many more questions than I originally started with.

$...$

4.) In looking at the structure that the theorem revolves around, namely that for some element $a$ we have $a = v' + \sum_i c_i \frac{u_i'}{u_i}$ for elements $v, u_i$ of the field and constants $c_i$, I don't know what each individual $\frac{u_i'}{u_i}$ represents, or even $v$; it just seems like a random haphazard statement at this point. I'm not saying it is, and I already know it isn't; I'm just saying that from a fresh outside perspective, that's how it comes across to me. Are $u_i$ and $v$ just any random elements in the field?

This hints at a kind of limitation in the concept of field extensions: you can only check whether a result can be written in terms of whatever you've extended to, so if you forget to extend your field to include exponentials, you can't check whether the solution to a differential equation involves exponentials.

5.) But there's hope. Many articles on the subject don't break down the extra details of this structure, not even the Wikipedia article for some reason, so I'll try to muster up what I can myself.

$e^x$ satisfies the differential equation $y' = y$. Meanwhile, $\ln(x)$ satisfies $y' = 1/x$, or equivalently $y' = e^{-y}$.

If $y = \ln(f(x))$, then $y$ satisfies the differential equation $y' = \frac{f'(x)}{f(x)}$; if $y = e^{f(x)}$, then $y' = y \cdot f'(x)$.
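Both relations are easy to sanity-check with sympy, leaving $f$ as a generic symbolic function:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# d/dx ln(f(x)) - f'(x)/f(x) should vanish
print(sp.simplify(sp.diff(sp.log(f(x)), x) - sp.diff(f(x), x)/f(x)))  # 0

# d/dx exp(f(x)) - exp(f(x))*f'(x) should vanish
print(sp.simplify(sp.diff(sp.exp(f(x)), x) - sp.exp(f(x))*sp.diff(f(x), x)))  # 0
```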

Well, I don't really see much on the differential relations of general algebraic functions, but maybe somehow, in all the composing and inverting that goes on, different logs and exponentials cancel out to yield various algebraic functions.

So, I look back at the statement of Liouville's theorem and start noticing a pattern: structures resembling the differential relations for $e^x$ and $\ln(x)$ come up, though not any general differential relation for algebraic functions.

I think maybe there's another haphazard connection: behind the scenes, when you're actually working with this theorem, there might be a bit of partial fraction decomposition going on that can separate a function into the structure in the theorem, splitting a given expression into various $v'$ and $u'/u$ components.
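For example (the function here is one I picked just to illustrate; sympy's apart does the partial fraction step):

```python
import sympy as sp

x = sp.symbols('x')
f = (2*x + 3)/(x**2 + 3*x + 2)

print(sp.apart(f, x))      # 1/(x + 1) + 1/(x + 2): a sum of u'/u pieces
print(sp.integrate(f, x))  # log(x + 1) + log(x + 2)
```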

6.) Another point is that, instead of saying "derivative," as I've often heard my professors say for however many years, articles sometimes specify this "derivation" operation.

What I understand this to be now is, straightforwardly, an abstraction of the differential operator that distills it down to a couple of its fundamental properties in order to study it: linearity, $\partial(c_1u+c_2v) = c_1\partial u + c_2\partial v$ for constants $c_1, c_2 \in \mathbb{R}$, and the product rule, $\partial (u \cdot v) = v\,\partial u + u\,\partial v$. Why does this not also include the chain rule? I don't know; I don't make a fuss about it.
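One reassuring thing I can check myself: those two axioms already force the familiar quotient rule, so closure of a differential field under division costs nothing extra:

$$\partial(1)=\partial(1\cdot 1)=1\cdot\partial(1)+1\cdot\partial(1)\implies\partial(1)=0,$$

$$0=\partial\left(v\cdot\tfrac{1}{v}\right)=\tfrac{1}{v}\,\partial v+v\,\partial\left(\tfrac{1}{v}\right)\implies\partial\left(\tfrac{1}{v}\right)=-\frac{\partial v}{v^2},$$

and therefore $\partial(u/v)=\frac{v\,\partial u-u\,\partial v}{v^2}$.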

I move on to making the connection that, somewhere along the way, it's important that elementary functions are closed under differentiation, and that a differential field is just like a field, except with the added condition that its elements are also closed under $\partial$, for a grand total of closure under $+, -, \cdot, /, \partial$.

For whatever reason, however, even though $+$ and $\cdot$ each have an inverse operation in a field, $\partial$ does not always have an inverse operation that maps back into the field, presumably because a field wasn't originally defined with differentiation in mind, so there are bound to be such instances. I'm sure you could define a new field-like mathematical object containing elements that are closed under $+, -, \cdot, /, \partial, \int$.

I don't see a proof that elementary functions are closed under differentiation. I don't think it would be that hard to prove, but a lemma would require defining what an elementary function even is in abstract terms, and it's possible that some technical definition of an elementary function is precisely that it is a solution to the differential structure stated in Liouville's theorem, but I don't know for sure.

$...$

Can anyone explain how to understand this theorem and provide a couple examples of actually working with it, starting with a function, creating field extensions, and then concluding its integral is or isn't elementary?

EDIT 1: Now, I don't understand why Liouville's theorem itself is as general as it is, but in looking through Stack Exchange and other papers, I think in practice, when invoking the theorem, one creates specific field extensions for specific algebraic operations, such as $C(x,\sqrt{x},\sqrt[3]{x},...)$ and so on. But I don't know for sure; there might be a shortcut where you can implicitly extend to all algebraic solutions using combinations of polynomials.


Fields and their extensions

You already seem to understand fields pretty well. A field extension is just one field included inside another. For example,

  1. $\mathbb{Q}\subseteq\mathbb{Q}$ (not a typo),
  2. $\mathbb{R}\subseteq\mathbb{C}$,
  3. $\{\text{constants}\}\subseteq\{\text{rational functions in }x\}$,
  4. $\{\text{rational functions in }x\}\subseteq\{\text{rational functions in }x,y\}$,
  5. $\mathbb{C}\subseteq\{\text{ratios of formal power series in }t\text{ with complex coefficients}\}$, and
  6. $\mathbb{Q}\subseteq\{\text{the surreal numbers}\}$

are all field extensions.

Given a pre-existing field extension $E\subseteq F$, we are often interested in the minimal extension necessary to include some $f\in F$; this is written $E(f)$. It's probably not obvious that $E(f)$ exists or should be unique (up to isomorphism); typical undergraduate abstract algebra sequences spend some time proving it. One can also throw in multiple (or infinitely many) elements of $F$; we define $E(\{f_j\}_{j\in J})$ to be the smallest field extending $E$ and containing all the $f_j$.
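For a concrete picture of the minimality, take the familiar example $\mathbb{Q}\subseteq\mathbb{R}$ with $f=\sqrt{2}$:

$$\mathbb{Q}(\sqrt{2})=\{a+b\sqrt{2}:a,b\in\mathbb{Q}\},\qquad\frac{1}{a+b\sqrt{2}}=\frac{a-b\sqrt{2}}{a^2-2b^2},$$

where the inversion formula stays in the set because $a^2-2b^2\neq 0$ whenever $(a,b)\neq(0,0)$ (otherwise $\sqrt{2}$ would be rational). Closure under the other three operations is a direct computation, so this really is the smallest field containing $\mathbb{Q}$ and $\sqrt{2}$.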

Notice that, if we can apply some operation to elements of $E$ to get $f$, $E(f)$ need not be closed under that operation. For example, $\sqrt{2+\sqrt{2}}\notin\mathbb{Q}(\sqrt{2})$; we have not included "all possible square roots". More importantly, $e^{x^2}\notin\mathbb{C}(x,e^x)$.
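(If you want to double-check the first claim computationally, sympy can compute minimal polynomials; since degrees multiply in towers of extensions, a degree-$4$ element cannot live inside the degree-$2$ field $\mathbb{Q}(\sqrt{2})$.)

```python
import sympy as sp

x = sp.symbols('x')

# A degree-4 minimal polynomial means sqrt(2 + sqrt(2)) generates a
# degree-4 extension of Q, so it can't fit inside degree-2 Q(sqrt(2)).
print(sp.minimal_polynomial(sp.sqrt(2 + sp.sqrt(2)), x))  # x**4 - 4*x**2 + 2
```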

With this notation, I can now rewrite examples (3) and (4) above as $K\subseteq K(x)$ and $K(x)\subseteq K(x,y)$, respectively. (The conventional notation for example (5) is similar: $\mathbb{C}\subseteq\mathbb{C}(\!(t)\!)$.) We can also answer your question above:

I still have a little bit of confusion here, because if the [base] field is $C(x)$, the field of rational functions, is this saying [$C(x,\ln{\!(x)})$] is all functions rational in logarithms, i.e. $P(\ln{\!(x)})/Q(\ln{\!(x)})$? I would think, similar to notation I might see with multivariate polynomials, that $C(x,\ln{\!(x)})$ is a field of functions that are rational in the variables $x$ and $\ln{\!(x)}$

You are correct; $C(x,\ln{\!(x)})$ is the latter. $C(x,\ln{\!(x)})$ is the smallest field that extends $C$ and contains both $x$ and $\ln{\!(x)}$. The former option does not contain $x$ (it is instead $C(\ln{\!(x)})$).

Obviously, the concept of field extensions supports way more generality than we'll ever need (the surreal numbers, really?), so we try to classify field extensions into two kinds:

  • Transcendental field extensions occur when $f$ does not "cancel" in any way; $E(f)$ is an infinite-dimensional $E$-vector space. The quintessential example is $\mathbb{Q}\subseteq\mathbb{Q}(\pi)$.
  • Algebraic field extensions occur when $f$ solves a polynomial from $E$; $E(f)$ is a finite-dimensional $E$-vector space. The quintessential example is $\mathbb{Q}\subseteq\mathbb{Q}(\sqrt{2})$.
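For concreteness: $\mathbb{Q}(\sqrt{2})$ has basis $\{1,\sqrt{2}\}$ as a $\mathbb{Q}$-vector space, so its dimension is $2$; whereas in $\mathbb{Q}(\pi)$, the powers $1,\pi,\pi^2,\dots$ are linearly independent over $\mathbb{Q}$ (any linear dependence would exhibit $\pi$ as the root of a polynomial with rational coefficients, contradicting its transcendence), so the dimension is infinite.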

Derivatives and derivations

You already understand the basics of derivations, but seem to be missing the generality Liouville's theorem is going for. Notice that if our field is $\mathbb{C}(x,y)$, then we have infinitely many possible derivations to choose from: every possible directional derivative $$a\frac{\partial}{\partial x}+b\frac{\partial}{\partial y}$$ is a derivation. If you hang around mathematical physicists long enough, you'll learn that they assume every differential equation is just written down in the wrong coordinates; if you're willing to use very weird derivations, it becomes something nice, like the exponential ODE. That's why we switch to talking about a derivation, and not just the derivative.

More generally, one can have derivations that do not arise from the usual derivative at all: for any function $g(x)\in\mathbb{C}(x)$, the map $\delta:\mathbb{C}(x)\to\mathbb{C}(x)$ given by $$\delta(h(x))=g(x)\,h'(x)$$ is a derivation (in fact, every derivation on $\mathbb{C}(x)$ that kills the constants arises this way). Thus, for example, we can have $\delta(x)=x^2$ quite easily. Likewise, there is a derivation $\delta$ on $\mathbb{C}$ that sends rational numbers to $0$, but $\delta(\pi)=1$. (To see this, note that $\mathbb{Q}(\pi)$ is just $\mathbb{Q}(x)$, evaluated at $\pi$; then extend each derivation on $\mathbb{Q}(\pi)$ to $\mathbb{C}$.)
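A quick symbolic check of the Leibniz rule for such a $\delta$, with $g$, $h_1$, $h_2$ left as generic functions:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')
h1, h2 = sp.Function('h1'), sp.Function('h2')

# The candidate derivation: delta(h) = g(x) * dh/dx
delta = lambda h: g(x)*sp.diff(h, x)

# Leibniz rule: delta(h1*h2) = h1*delta(h2) + h2*delta(h1)
print(sp.expand(delta(h1(x)*h2(x)) - h1(x)*delta(h2(x)) - h2(x)*delta(h1(x))))  # 0
```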

Now I'll turn to your question (5):

Why does this not also include the chain rule?

Recall the examples of field extensions up above! Only cases (3) and (4) have an obvious notion of function composition. So in many cases, the chain rule doesn't even make sense.

Finally, fixing a derivation $\delta$ allows us to separate out two more useful sorts of transcendental field extensions:

  • $E\subseteq E(l)$ is logarithmic if there exists $p\in E$ such that $\delta(p)=\delta(l)\cdot p$. The intuition pump is $\mathbb{C}(x)\subseteq\mathbb{C}(x,\ln{\!(x)})$, for which $p=x$.
  • $E\subseteq E(e)$ is exponential if there exists $p\in E$ such that $\delta(e)=e\cdot p$. The intuition pump is $\mathbb{C}(x)\subseteq\mathbb{C}(x,e^x)$, for which $p=1$.
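Checking both bullets against the usual derivative $\delta=\frac{d}{dx}$: for $l=\ln{\!(x)}$ and $p=x$, indeed $\delta(l)\cdot p=\frac{1}{x}\cdot x=1=\delta(p)$; for $e=e^x$ and $p=1$, indeed $e\cdot p=e^x=\delta(e)$.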

Elementary Functions

I don't see a proof that elementary functions are closed under differentiation…a lemma would require defining what an elementary function even is in abstract terms

Indeed! To apply Liouville's theorem, you start with a notion of elementary functions: a field extension $E\subseteq F$, with a derivation $\delta$ on $F$ that sends $E$ to $E$. For example, your field extension could be $$\mathbb{C}(x)\subseteq\mathbb{C}(x,\ln{\!(x)},e^x,e^{x^2})$$ with the usual derivative. This "declares" to Liouville's theorem that you do not consider anything else (say, $e^{\sqrt{x}}$) elementary. The elementary functions are closed under differentiation, because $\delta$ has to send $E$ to $E$ and $F$ to $F$.

I think in practice, when invoking the theorem, one creates specific field extensions for specific algebraic operations, such as $C(x,\sqrt{x},\sqrt[3]{x},\dots)$ and so on. But I don't know for sure; there might be a shortcut where you can implicitly extend to all algebraic solutions using combinations of polynomials.

Not quite. Liouville's theorem requires the field extension to be elementary; that is, a finite sequence of nested algebraic, logarithmic, or exponential extensions.

If you're looking to find an antiderivative, Liouville's theorem cannot do this for you; it sometimes gives "false negatives". The traditional example is to declare only rationals to be elementary: $\mathbb{C}(x)\subseteq\mathbb{C}(x)$. Then $x^{-1}$ has no elementary antiderivative, even though $\ln{\!(x)}$ is a perfectly good closed form in the everyday sense.

But Liouville's theorem does give you some information to deduce when no such antiderivative exists.

Now, elementary functions include "algebraic" functions, so I don't understand how Liouville's theorem makes the leap from rational functions to general algebraic functions.

It doesn't. For every elementary antiderivative, there is some elementary field extension that contains the antiderivative. But the field extension changes based on which antiderivative you are looking at; the field extension containing all possible antiderivatives is (almost always) not elementary. For example, fix $c\in\mathbb{C}$ and consider $(x-c)^{-1}\in\mathbb{C}(x)$. The antiderivative of $(x-c)^{-1}$ is $\ln{\!(x-c)}$; the smallest field containing antiderivatives for every element of $\mathbb{C}(x)$ must contain $\mathbb{C}(x,\{\ln{\!(x-c)}\}_{c\in\mathbb{C}})$, which is already (uncountably) infinitely many steps away from $\mathbb{C}(x)$.

Using Liouville's Theorem

I don't know what each individual $\frac{u_i'}{u_i}$ represents, or even $v$; it just seems like a random haphazard statement at this point.

$v$ represents the terms in the antiderivative that you can express within $E$. For example, when integrating $\frac{x^2-x+1}{x(1-x)^2}\in\mathbb{C}(x)$, the result includes $(1-x)^{-1}\in\mathbb{C}(x)$; this would be $v$. Each $u_i$ is an element of $E$ whose logarithm the antiderivative must introduce; in the same integration, $u_1=x$, contributing the term $\ln{\!(x)}$.
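Spelled out, that integration is a routine partial fraction computation:

$$\frac{x^2-x+1}{x(1-x)^2}=\frac{1}{x}+\frac{1}{(1-x)^2},\qquad\int\frac{x^2-x+1}{x(1-x)^2}\,dx=\ln{\!(x)}+\frac{1}{1-x}+C,$$

which matches Liouville's shape $v'+c\,\frac{u'}{u}$ with $v=\frac{1}{1-x}\in\mathbb{C}(x)$, $u=x\in\mathbb{C}(x)$, and constant $c=1$.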

This hints at a kind of limitation in the concept of field extensions: you can only check whether a result can be written in terms of whatever you've extended to, so if you forget to extend your field to include exponentials, you can't check whether the solution to a differential equation involves exponentials.

No, the exact opposite is true! Liouville's theorem says that "integration does not introduce exponentials": it can only introduce logarithms. After you've removed $v'$ (the derivative of something in $E$) and the $\frac{u_i'}{u_i}$ terms (logarithms of things in $E$), there's nothing left!

Can anyone explain how to understand this theorem and provide a couple examples of actually working with it, starting with a function, creating field extensions, and then concluding its integral is or isn't elementary?

Ted cites this wonderful answer (scroll down to "For future references here is a complete re-transcript…"); rather than try to improve on it, I will direct your upvotes (if any) there and to Ted's comment.
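(To give at least a taste of the flavor here, the following is the standard sketch, found in many references, of why $\int e^{x^2}\,dx$ is not elementary.) Work in $\mathbb{C}(x)\subseteq\mathbb{C}(x,e^{x^2})$. A standard strengthening of Liouville's theorem for exponential extensions says that if an elementary antiderivative exists, it must have the form $R(x)\,e^{x^2}$ with $R\in\mathbb{C}(x)$; differentiating,

$$\left(R\,e^{x^2}\right)'=(R'+2xR)\,e^{x^2}=e^{x^2}\quad\Longleftrightarrow\quad R'+2xR=1.$$

No rational $R$ can satisfy this: a pole of $R$ of order $n$ would give $R'$ a pole of order $n+1$ that nothing else on the left side can cancel, so $R$ would have to be a polynomial; but then $2xR$ has degree $\deg R+1\geq 1$ while $R'$ has smaller degree, so $R'+2xR$ cannot equal the constant $1$. Hence no elementary antiderivative exists.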