Why do PDEs seem so unnatural? [closed]

First let me preface by saying that I'm well aware that plenty of math topics seem unnatural upon first learning them. But PDEs have a special place in my "unnatural" category of mathematics, specifically because I'm comfortable with pretty much everything else, even when I'm lost. In PDE texts I frequently encounter statements by authors such as "but if we demand system X has property Y" or "if we assume the solution is of the form $T(t)X(x)$", etc. These all feel backwards compared to, say, real analysis.

It seems to me like one of the Bernoullis came up with separation of variables as a way to get a solution, nobody ever tried anything else, and we just kept pushing eigenfunction expansion until there was no way to back out. It's like we found an alien spacecraft, discovered some parts that do stuff, but at the end of the day we have no idea what inspired the design.

Is there another method of handling PDEs that seems more "natural" (in the sense that number theory seems natural) as opposed to separation of variables, eigenfunction expansion, and Green's functions? There's no way I'm the first person to question the current trajectory of PDEs.

Note: I do find some of the math in PDEs to be pretty cool, in a problem-solving sense. Eigenfunction expansions and Fourier series are cool in their own right, and I like Green's functions too. I just can't get over the feeling that we're forcing the math to work and then back-justifying everything. The Dirac delta, for example.

Rant over. Can anybody lead me to the light, or at least tell me to stop whining in a motivational way? Thank you.


The big problem that makes PDEs so difficult is geometry. ODEs are fairly natural because there are only a few cases to consider when it comes to the geometry of the domain (essentially just intervals) and the known information about it. Because of this we can find general solutions to many (linear) ODEs, and there's usually a natural progression to get there.

PDEs, on the other hand, have at least two independent variables, so the variety of possible domains grows from intervals to essentially any reasonable connected region. This means that initial and boundary values carry much more information about the solution, and a general solution would have to account for all possible geometries. That isn't really possible in any meaningful way, so there usually aren't general solutions.

When we do pick a geometry, it often simplifies the problem significantly. One nice domain is $\mathbb{R}^n$. Many simple PDEs have invariance properties, which means that if we have enough room to "shift" and "scale" parts of the equation, we can often reason our way to what the solution should look like. In these situations there may be general solutions (see PDEs on unbounded domains), and these solutions look more like the straightforward kind we see in ODEs.
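To make this concrete (a standard example, not something specific to this answer): the one-dimensional heat equation is invariant under a parabolic rescaling, and chasing that symmetry essentially hands you its fundamental solution. If $u(x,t)$ solves

$$u_t = k\,u_{xx},$$

then so does $u(\lambda x, \lambda^2 t)$ for every $\lambda > 0$. Looking for a solution that respects this scaling (and preserves total heat) leads to the self-similar ansatz $u(x,t) = t^{-1/2} f(x/\sqrt{t})$, and solving the resulting ODE for $f$ gives the heat kernel

$$\Phi(x,t) = \frac{1}{\sqrt{4\pi k t}}\,e^{-x^2/(4kt)}, \qquad t > 0,$$

from which the solution on the whole line is built by convolution with the initial data.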

Many PDEs (and ODEs) simply don't have closed-form solutions, so we usually have to rely on series methods and other roundabout ways of writing solutions that don't really "look" like solutions.

Separation of variables is a reasonable guess that the effect of each independent variable can be split off in some way. We can try writing the solution as a sum, a product, or some other combination of functions of each independent variable, and this often reduces the problem in a way that allows us to separate the PDE into a collection of ODEs. We don't know that this will work in every case, but if we can show uniqueness of the solution, then finding any solution at all means we have found the solution to the problem.
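As a concrete sketch of that guess (the standard textbook calculation, nothing beyond what is described above): take the heat equation on an interval with zero boundary values and try a product solution,

$$u_t = k\,u_{xx}, \qquad u(0,t) = u(L,t) = 0, \qquad u(x,t) = X(x)\,T(t).$$

Substituting and dividing by $k\,X(x)T(t)$ gives

$$\frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda,$$

a constant, since the left side depends only on $t$ and the middle only on $x$. The boundary conditions force $X_n(x) = \sin(n\pi x/L)$ with $\lambda_n = (n\pi/L)^2$, and then $T_n(t) = e^{-k\lambda_n t}$, so candidate solutions are superpositions

$$u(x,t) = \sum_{n \ge 1} b_n\, e^{-k\lambda_n t} \sin\!\left(\frac{n\pi x}{L}\right),$$

with the $b_n$ read off from the initial data as Fourier sine coefficients. This is precisely where eigenfunction expansions enter, and a uniqueness theorem is what lets you stop there.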

The last main reason is that the theory of PDEs is way harder than the theory of ODEs. So, when you're first learning to solve ODEs, you can be introduced to these methods with a bit of theory and some background on why each of the guesses and techniques makes some sense. When first learning to solve PDEs, however, you probably will not have anywhere near the amount of background you need to fully understand the problems. You can be taught the methods, but they will always seem like a random guess or just a technique that happens to work, until you learn about the theory behind it. As Eric Towers mentions, some Lie algebra would be a good place to start, and I would also recommend PDE books with a more theoretical slant to them, such as Lawrence Evans' text. Since you seem to have some background in real analysis (and so presumably some basic modern/abstract algebra), I think both of these paths should be achievable at your level.


I would strongly recommend learning about Lie symmetry analysis of differential equations.

  • Even in ODEs, the sentence structure you describe is common. "If $N_x = M_y$, then the equation is exact and we can..." (A short worked example of this follows the list.)
  • All the techniques from ODEs and all the techniques you mention for PDEs can be expressed in this one framework.
  • Computer technology is up to the task of computing the (algebraically horrendous) prolongations needed to calculate anything. This was definitely not so in Lie's time.
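To make that first bullet concrete (a standard exactness example, not from the original answer): for an equation written as $M(x,y)\,dx + N(x,y)\,dy = 0$, the condition $M_y = N_x$ is exactly what guarantees a potential function. For instance,

$$2xy\,dx + x^2\,dy = 0, \qquad M_y = 2x = N_x,$$

so the equation is exact: there is an $F$ with $F_x = M$ and $F_y = N$, here $F(x,y) = x^2 y$, and the solutions are the level curves $x^2 y = C$. The symmetry framework in the second bullet is one setting in which conditions like this, along with integrating factors and separation of variables, stop looking like isolated tricks.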

Starting places:

  • Bluman and Kumei, Symmetries and Differential Equations
  • Olver, Applications of Lie Groups to Differential Equations

Additionally, if you get your head wrapped around these ideas, you will have a head start on understanding Galois theory. (Groups of symmetries holding the set of solutions of a differential equation invariant are remarkably analogous to groups of symmetries holding the set of roots of a polynomial invariant.)