What is the motivation for analytic solutions in Mathematical Physics?

I am trying to understand why one cares about finding analytic/theoretical solutions to PDEs when one can use numerical methods.

If you tell me, "only mathematicians try to find theoretical solutions and understand them", I can live with that, after all that is part of what mathematics is all about. But it seems that physicists and engineers also care about theoretical solutions. What is their motivation?

To expand on my question, consider any application involving Bessel functions. Even the simplest PDE will lead to some nasty series. An actual problem, something real, will be a lot messier, so I doubt that anyone building something specific based on the Bessel series will work with it analytically.

What is to be gained by solving the PDE for a vibrating membrane? Is it because the theoretical solutions imply certain physical laws that govern the process? Perhaps this is what the numerical approach is missing.


There are a number of reasons I can think of:

  1. An exact solution in terms of special functions lets you work from tables of those functions, so you only need calculations based on a limited set of common, well-tabulated functions. Of course, this is less relevant these days with these new-fangled steam-calculator computer things on everyone's desks, but Abramowitz and Stegun is half special-function tables for a reason.

  2. Structure. Given explicit solutions, one is far more able to examine the wider properties of the solution. In particular, suppose I have an equation with a parameter in it. How do I study what happens to the solution as the parameter varies, if I don't have special function solutions? How do you know you're seeing all the behaviour?

  3. Wider validity. What if my numerical algorithm doesn't converge? If I have a series, it may be possible to transform it so that it converges much more quickly, or indeed so that it converges at all. This is why theta functions are so useful: the convergence of the usual series may be slow, but applying Jacobi's imaginary transformation makes the convergence rapid (and hence the calculation much shorter); see the first sketch after this list. How would I know that without special function properties?

  4. How do I actually know my calculation is right? Or: how do I know artefacts in my numerical calculation are a result of what I have done, rather than of what the function actually does? See the Wilbraham–Gibbs phenomenon in Fourier series: it was (possibly...) first discovered by a chap using a numerical integrator, who kept seeing these extraneous oscillations near discontinuities. (The second sketch after this list illustrates it.)

Oh, and not forgetting the mathematical physicist's answer:

  5. Cos it's cool, dammit!
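To make point 3 concrete, here is a minimal Python sketch (the value of $t$ and the cutoff tolerance are illustrative choices of mine) of Jacobi's imaginary transformation for the theta series $\theta(t) = \sum_{n\in\mathbb{Z}} e^{-\pi n^2 t}$, which satisfies $\theta(t) = t^{-1/2}\,\theta(1/t)$. For small $t$ the series converges slowly; the transformed series converges almost instantly.

```python
import math

# Theta series theta(t) = sum over all integers n of exp(-pi n^2 t).
# Jacobi's imaginary transformation: theta(t) = theta(1/t) / sqrt(t).
def theta(t, tol=1e-15):
    total, n = 1.0, 1                      # the n = 0 term is 1
    while True:
        term = 2.0 * math.exp(-math.pi * n * n * t)   # +n and -n together
        total += term
        if term < tol:
            return total, n                # value and largest n needed
        n += 1

t = 0.01                                   # "hard" regime: q = exp(-pi t) near 1
direct, n_direct = theta(t)
transformed, n_trans = theta(1.0 / t)      # rapidly convergent series
transformed /= math.sqrt(t)

print(f"direct series     : {direct:.12f}   ({n_direct} terms past n = 0)")
print(f"transformed series: {transformed:.12f}   ({n_trans} terms past n = 0)")
```

Both sums agree to machine precision, but here the transformed series needs a single term where the direct one needs a few dozen, and the gap widens as $t \to 0$.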
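And for point 4, a small sketch of the Wilbraham–Gibbs phenomenon using the standard Fourier series of the square wave, $\mathrm{sign}(\sin x) \sim \frac{4}{\pi}\sum_{k\ \mathrm{odd}} \frac{\sin kx}{k}$ (the grid size and term counts are arbitrary choices of mine):

```python
import numpy as np

# Partial sums of the Fourier series of the square wave sign(sin x):
#   (4/pi) * (sin x + sin(3x)/3 + sin(5x)/5 + ...)
# The true function equals 1 on (0, pi), yet the partial sums overshoot.
x = np.linspace(1e-6, np.pi / 2, 50_000)   # fine grid near the jump at x = 0

for n_terms in (10, 100, 1000):
    partial = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):     # odd frequencies 1, 3, 5, ...
        partial += np.sin(k * x) / k
    partial *= 4 / np.pi
    print(f"{n_terms:5d} terms: max of partial sum = {partial.max():.4f}")

# The maximum settles near (2/pi)*Si(pi) ~ 1.1790, about a 9% overshoot
# of the jump; adding terms moves the peak toward x = 0 but never shrinks it.
```

Without the theory, those persistent wiggles look exactly like a bug in the summation.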

What's the point of painting when we have cameras? To ask such a question is to misunderstand the point of painting. The point of painting isn't to accurately represent reality, it's to create beautiful images, which is a different (but related) problem.

Similarly, you're misunderstanding the point of physics. The point of physics is to understand how the world works. This is a more general quest than simply finding out methods to make numerical predictions about the world.

To write down the formula governing a physical process is, in and of itself, knowledge. I don't mean that the numbers derived from that formula are knowledge, so that the formula is a tool to obtain knowledge, I mean that knowing that the process is governed by that formula is its own piece of knowledge.

When we say we want "an analytic expression", we mean we want an expression in terms of functions we already know. We could very well just define a new transcendental function as the solution to our equation, which mathematicians often do, but the "we already know" is key, because to express a function in terms of known functions is to describe precisely the relation of one phenomenon to others. To say that the voltage in a circuit is equal to $\sin(t)$ is not just to say "I have a method for making numerical predictions about this circuit"; it is to say that there is a profound relation between the voltage in a circuit and uniform circular motion. This is a beautiful and worthwhile fact to know, separately from its potential as a tool for generating numbers.


I'd like to elaborate on one of the points that has already been made. My point does not pertain to problems from physics where we actually get an explicit solution. Instead, I would like to justify why you would want to use PDE theory on a problem that you can't solve explicitly.

One of the first things that "real" PDE theory teaches us is that the statement of a PDE does not always tell us what type of solution we should expect. Specifically, there might not be a classical solution, meaning a function that possesses all the derivatives appearing in the equation and satisfies the equation pointwise.

In many real problems, ranging from simple ones like linear transport to complex ones like Hamilton-Jacobi-Bellman equations, classical solutions often simply do not exist. This can happen artificially, such as when we use a discontinuous initial condition, or it can be thrust upon us in various ways. In either case, even when we are looking for a numerical solution, we should really have some idea of what type of thing we're looking for, and it is hard to know this without having a weak formulation in front of you.

Here are some basic questions you might ask about the true solution. Is it continuous? How many derivatives does it have? A method assuming more regularity than you have will blow up.

In a problem with both time and space, can we be sure that the solution remains continuous in space if the initial condition was continuous in space? In hyperbolic problems we often cannot, and so we have to manage shocks. A standard example is a very simple model of traffic flow: $u_t+(1-u)u_x=0$ (where $0 \leq u(0,x) \leq 1$). A numerical method that was not carefully designed to accommodate shocks will give nonsensical results, specifically non-physical shock velocities, when it encounters one.
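Here is a minimal numerical sketch of that failure (Python; the grid, CFL number, and Riemann data are illustrative choices of mine, not from the discussion above). In conservation form the equation reads $u_t + f(u)_x = 0$ with $f(u) = u - u^2/2$, so for left state $0$ and right state $1$ the Rankine-Hugoniot condition gives a shock speed of $s = \frac{f(1)-f(0)}{1-0} = \frac{1}{2}$. A conservative upwind scheme reproduces that speed, while naively upwinding the equation as written leaves the shock frozen in place, because the coefficient $1-u$ vanishes on the $u=1$ side of the jump.

```python
import numpy as np

# u_t + (1 - u) u_x = 0 rewritten in conservation form u_t + f(u)_x = 0:
def f(u):
    return u - 0.5 * u**2                  # f'(u) = 1 - u >= 0 on [0, 1]

nx = 400
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
u0 = np.where(x < 0.25, 0.0, 1.0)          # Riemann data, jump at x = 0.25

dt = 0.5 * dx                              # CFL-stable: max |f'(u)| = 1
t_end = 0.4                                # exact shock reaches 0.25 + 0.5*0.4

u_cons = u0.copy()                         # conservative upwind scheme
u_naive = u0.copy()                        # naive upwind on the form as written
for _ in range(int(round(t_end / dt))):
    u_cons[1:] -= dt / dx * (f(u_cons[1:]) - f(u_cons[:-1]))
    u_naive[1:] -= dt / dx * (1.0 - u_naive[1:]) * (u_naive[1:] - u_naive[:-1])

def shock_pos(u):
    return x[np.argmax(u > 0.5)]           # first cell past the jump

print("exact (Rankine-Hugoniot):", 0.25 + 0.5 * t_end)   # 0.45
print("conservative scheme     :", shock_pos(u_cons))     # close to 0.45
print("naive nonconservative   :", shock_pos(u_naive))    # stuck near 0.25
```

The naive scheme produces a perfectly plausible-looking step profile; only the theory tells you its speed is wrong.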

This is just one of many things one can take from PDE theory and apply when ultimately solving the problem numerically.


The point of PDE theory is not to find solutions to individual differential equations, any more than the point of real analysis is to compute integrals. Here are two more specific examples:

1) What sorts of conditions can we impose to ensure that the solution is actually smooth, rather than merely having derivatives up to $k$th order? That isn't something that can be observed from numerical solutions, and it's a tricky problem in general. On the other hand, elliptic regularity gives convenient conditions for smoothness under mild assumptions. One of the most common elliptic operators is the Laplacian, and so there are various applications in mathematical physics. (A precise statement of the Laplacian case appears after these examples.)

2) De Rham cohomology on a smooth (oriented, closed) manifold reflects its topological properties, despite being defined by nothing more than a system of differential equations. The dimension of the solution space has huge topological significance, but writing down those exact solutions is not particularly useful (or even feasible) in most cases. There are also many generalizations of this idea, e.g., the Atiyah-Singer index theorem. (The defining quotient is recalled below.)
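To make the first example concrete, here is elliptic regularity specialized to the Laplacian (Weyl's lemma): if $u \in L^1_{\mathrm{loc}}(\Omega)$ satisfies $\Delta u = f$ in the sense of distributions for some $f \in C^\infty(\Omega)$, then in fact $u \in C^\infty(\Omega)$. No finite amount of numerical resolution could certify a conclusion of that kind.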
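And for the second: the groups in question are

$$H^k_{\mathrm{dR}}(M) \;=\; \frac{\{\omega \in \Omega^k(M) : d\omega = 0\}}{\{\,d\eta : \eta \in \Omega^{k-1}(M)\,\}},$$

and de Rham's theorem identifies $\dim H^k_{\mathrm{dR}}(M)$ with the $k$-th Betti number of $M$, so the dimension of the solution space of a differential equation computes a topological invariant.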