To what extent is linear stability analysis of numerical methods relevant to nonlinear ODEs?

Some context.

Consider a numerical method $y_{n+1} = \Psi(y_n, h)$ for solving ODEs, where $h$ is the step size.

The following definition of linear stability is ubiquitous in the introductory literature on numerical methods for ODEs (see Definition 8 in these notes):

Suppose $y' = \lambda y$ for some $\lambda\in\mathbb C$. Then the numerical method $\Psi$ is linearly stable (for that $\lambda$ and step size $h$) if $y_n\to 0$ as $n\to\infty$.
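To make the definition concrete, here is a small sketch (my own, not from the notes) using forward Euler, whose update on the test equation is $y_{n+1} = (1 + h\lambda)y_n$, so the iterates decay exactly when $|1 + h\lambda| < 1$:

```python
# Forward Euler on the test equation y' = lam*y:
# y_{n+1} = (1 + h*lam) * y_n, so |y_n| -> 0 iff |1 + h*lam| < 1.

def euler_test_equation(lam, h, y0=1.0, n_steps=100):
    """Iterate forward Euler on y' = lam*y and return |y_n| after n_steps."""
    y = y0
    for _ in range(n_steps):
        y = (1 + h * lam) * y
    return abs(y)

lam = -10.0  # Re(lam) < 0, so the exact solution decays
print(euler_test_equation(lam, h=0.15))  # |1 + h*lam| = 0.5 < 1: decays
print(euler_test_equation(lam, h=0.25))  # |1 + h*lam| = 1.5 > 1: blows up
```

The same $\lambda$ is stable or unstable depending on $h$, which is why the stability region is usually drawn in the complex $h\lambda$ plane.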

As a numerical methods novice, I wonder to what extent this definition is relevant to non-linear ODEs given that it is formulated in terms of a linear model problem. Later in the same notes, the following statement appears and seems to address this:

If a numerical method is stable in the above sense for a certain range of values of $\lambda$, then it is possible to show that it will be stable for the ODE $y' = f(t,y)$ as long as $\frac{\partial f}{\partial y}$ is in that range of $\lambda$ (and $f$ is smooth enough). We won’t prove this theorem here.

This statement has intuitive appeal: $\partial f/\partial y$ at a given point determines the behavior of the linearized equation, so one might imagine that applying stability analysis to the linearization at each point gives relevant information about the performance of the numerical method near that point. But I'm left wanting a more detailed discussion, and a proof.
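As an illustration of the quoted claim (my own toy example, not from the notes): for $y' = -y^3$ the Jacobian is $\partial f/\partial y = -3y^2$, so forward Euler's test-equation condition $|1 + h\lambda| < 1$ suggests stability where $3hy^2 < 2$; a step size that is fine near small $y$ fails for a larger initial value.

```python
# Forward Euler on the nonlinear ODE y' = -y^3 (exact solutions decay to 0).
# The local linearization has lambda = df/dy = -3*y^2, so the test-equation
# condition |1 + h*lambda| < 1 predicts stability where 3*h*y^2 < 2.

def euler_cubic(y0, h, n_steps=50):
    """Iterate forward Euler on y' = -y^3 and return |y_n|."""
    y = y0
    for _ in range(n_steps):
        y = y + h * (-y ** 3)
        if abs(y) > 1e6:  # stop once the iteration has clearly blown up
            return abs(y)
    return abs(y)

h = 1.0
print(euler_cubic(0.5, h))  # 3*h*y^2 = 0.75 < 2 along the trajectory: decays
print(euler_cubic(2.0, h))  # 3*h*y^2 = 12 > 2 initially: iterates blow up
```

The same fixed $h$ succeeds or fails depending only on where the trajectory sits, which matches the idea of checking $h\,\partial f/\partial y$ against the linear stability region pointwise.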

Questions.

  1. Where can I find a more detailed discussion of that second quoted statement?
  2. In general, is linear stability analysis considered relevant to nonlinear systems because one can prove statements, similar to the second quoted one, to the effect of "if the numerical method is stable for the linearized equation at every point, then it will be stable for the full, nonlinear equation"?

References appreciated.


The ideas you need to study are nicely discussed, in an informal way, at https://wjrider.wordpress.com/2016/05/20/the-lax-equivalence-theorem-its-importance-and-limitations/, with references to the original sources for the Lax and Dahlquist equivalence theorems (which should clarify your second quote).

As a practitioner, I always tell myself that I understand the relevance of linear analysis through the thought that "on a fine-enough mesh all problems are locally linear", just as you suggest, although the proofs make some hedging remarks about things being "sufficiently smooth". Both principles are sufficiently reliable in practice that they never seem to be questioned.

In the PDE case, nonlinear conservation laws are handled through a third principle, the Lax-Wendroff theorem: if a consistent conservative method converges, it converges to a weak solution, though not necessarily to a physical solution. An additional requirement is the satisfaction of an entropy principle (see e.g. LeVeque, Numerical Methods for Conservation Laws, Birkhäuser). I am aware that entropy principles also apply to nonlinear ODEs, but that is not my area. The journal Entropy might be a useful resource.

Note that stability merely asserts that the calculation does not blow up, i.e. that the iterates remain bounded. Consistency is the property that, if the method does converge, it converges to the correct answer.
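A toy sketch of that distinction (my own, not part of the answer above): the scheme $y_{n+1} = y_n$ never blows up, so it is stable, but it is not consistent with $y' = f(t,y)$, so it converges (trivially) to the wrong answer; forward Euler on $y' = -y$ is both stable for $h < 2$ and consistent, so it converges to the true value $e^{-1}$ at $t = 1$.

```python
# Two "methods" for y' = -y, y(0) = 1, integrated to t = 1 with n steps.
# The exact answer is exp(-1) ~ 0.3679.

def do_nothing(y0, n):
    """Stable but inconsistent: y_{n+1} = y_n. Bounded, but converges to y0."""
    return y0

def forward_euler(y0, n):
    """Stable (h = 1/n < 2) and consistent: converges to the true solution."""
    h = 1.0 / n
    y = y0
    for _ in range(n):
        y = y + h * (-y)
    return y

for n in (10, 1000):
    print(do_nothing(1.0, n), forward_euler(1.0, n))
# do_nothing stays at 1.0 for every n; forward_euler approaches exp(-1) ~ 0.3679
```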