When does a separation-of-variables series solution exist for a PDE?

I am wondering whether there are general results about when a series solution via the method of separation of variables exists for a PDE; that is, what conditions on the PDE and on the initial and boundary conditions allow one to assume the solution is a sum of products of functions, each involving only a single independent variable, and then proceed to solve the equation in Fourier's way.
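For concreteness, the kind of ansatz I have in mind is the textbook one, e.g. for the heat equation $u_t = k\,u_{xx}$ one assumes
$$ u(x,t) = X(x)\,T(t) \quad\Longrightarrow\quad \frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda, $$
solves the resulting ODEs, and then superposes the product solutions $u = \sum_n c_n X_n(x) T_n(t)$ to match the initial and boundary data.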


Solution 1:

For starters, the equation should be linear (if you want a term-by-term series solution).
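(Roughly, this is because a series solution built from separated modes relies on superposition: if $L$ is a linear operator and $L[u_n] = 0$ for each $n$, then, at least formally,
$$ L\Big[\sum_n c_n u_n\Big] = \sum_n c_n\, L[u_n] = 0, $$
and it is this termwise identity that fails for nonlinear equations.)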

For geometric partial differential equations, separation of variables is often tied intimately to the symmetry properties of the underlying (pseudo-)Riemannian metric. To be more precise, consider the ultrahyperbolic/wave/Laplace equation $$ \Box_g \phi = 0 $$ on some pseudo-Riemannian manifold $(M,g)$. As it turns out, separation of variables for this equation is closely related to the separability of the Hamilton-Jacobi equation associated to the geodesic flow. For the latter, you need as many "conserved quantities" as there are degrees of freedom.
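To sketch what separability of the Hamilton-Jacobi equation means here (this is the standard formulation, stated in general terms rather than for any particular metric): for the geodesic Hamiltonian $H = \tfrac12 g^{ab} p_a p_b$, the Hamilton-Jacobi equation reads
$$ g^{ab}\,\partial_a W\,\partial_b W = \text{const}, $$
and separability asks for a complete solution of the additive form $W = \sum_{i=1}^n W_i(x^i)$; the $n$ separation constants play the role of the conserved quantities referred to above.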

For an $n$-dimensional manifold, in general you will need $n$ independent conserved quantities. For example, you can perform a separation of variables when $M$ is Euclidean space using rectangular coordinates because of the conservation of energy-momentum, which gives $n$ conserved scalar quantities. For the linear wave equation, you can also perform a separation of variables in spherical coordinates: there the relevant conserved quantities are the total energy and the angular momenta, which again add up to $n$ conserved quantities.
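For instance (the standard three-dimensional computation), for the wave equation on Euclidean $\mathbb{R}^3$ in spherical coordinates the product ansatz
$$ \phi(t,r,\theta,\varphi) = e^{-i\omega t}\,R(r)\,\Theta(\theta)\,e^{im\varphi} $$
reduces $\Box\phi = 0$ to ODEs for $R$ and $\Theta$, and the separation constants $\omega$, $\ell(\ell+1)$, and $m$ correspond to the energy, the total angular momentum, and the angular momentum about the $z$-axis.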

(For Hamilton-Jacobi type equations [and similarly ordinary differential equations of dynamical systems] the method of separation of variables [the finding of action-angle coordinates] is fairly well-developed. You should probably consult textbooks in those subjects for treatments on what is known.)

While the rule of thumb, as described above for geometric equations, is to try to find $n$ conserved quantities, in practice there are also cases where fewer conserved scalar quantities are available, yet a separation of variables can still proceed. This is sometimes called "hidden symmetry". A prime example is the wave equation (and the geodesic equations) on the Kerr-Newman space-times of general relativity. The geometry only admits two obvious symmetries: time translation and rotation about the $z$-axis. What saves the day is the rather mysterious "Carter tensor", which gives rise to a higher-order symmetry.
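To make the higher-order symmetry slightly more concrete (this is the standard Killing-tensor formulation): the Carter tensor is a symmetric tensor $K_{ab}$ satisfying
$$ \nabla_{(a} K_{bc)} = 0, $$
so that $K_{ab}\,p^a p^b$ is conserved along geodesics with momentum $p^a$; together with the metric itself and the two Killing vector fields $\partial_t$ and $\partial_\varphi$, this supplies the fourth conserved quantity needed to separate the geodesic and wave equations on the Kerr(-Newman) space-times.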

For more details on some of the above, you may want to consult this article and the references therein. (Or this.)

As with most questions about partial differential equations, there isn't a well-developed one-size-fits-all theory. But if you search, say, MathSciNet for the phrase "separation of variables", you'll see that it is a field of active research. A large part of the established practice, however, can be summarised as the search for continuous groups of symmetries for solutions of differential equations. A standard reference in that direction is P. Olver's Applications of Lie Groups to Differential Equations.

Lastly, one thing I am somewhat fond of mentioning whenever separation of variables comes up is the following paper by Eisenhart, "Enumeration of potentials for which one-particle Schrödinger equations are separable," Phys. Rev. 74, 87–89 (1948).