It's difficult to know what you'll find interesting without further information, but here's an application from partial differential equations anyway.

Many undergraduates learn how to solve the initial value problem $$ \begin{cases} u_t + u_{xxx} = 0 \qquad x,t\in\mathbb{R}\\ u(x,0) = u_0(x). \end{cases} $$ Using the Fourier transform, the solution is written $$ u(x,t) = U(t)u_0(x) = [e^{it\xi^3}\hat{u}_0]^\vee(x). $$ A basic course will usually provide a description of the solution procedure, sidestepping convergence issues. What are these issues?
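If you'd like to see this formula concretely, here is a minimal numerical sketch using NumPy's FFT; the grid, the Gaussian datum, and the time value are my own illustrative choices, not part of the argument.

```python
import numpy as np

# Sketch: realize u(x,t) = [e^{i t xi^3} u0-hat]^vee on a periodic grid
# via the discrete Fourier transform. Grid size, domain, datum, and t
# are illustrative choices.
N, L = 1024, 50.0
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # frequency variable

u0 = np.exp(-x**2)                          # a sample L^2 initial datum

def U(t, f):
    """Apply the propagator: multiply f-hat by e^{i t xi^3} and invert."""
    return np.fft.ifft(np.exp(1j * t * xi**3) * np.fft.fft(f))

u = U(0.5, u0)                              # the solution at time t = 0.5
```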

For concreteness, suppose $u_0 \in L^2(\mathbb{R})$. Since $|e^{it\xi^3}|=1$, two applications of Parseval's theorem show $$ \|U(t)u_0\|_2 = \|e^{it\xi^3}\hat{u}_0\|_2 = \|\hat{u}_0\|_2 = \|u_0\|_2. $$ That is, if the initial data lies in $L^2$, then at each time $t$, the solution (considered as a function in $x$) also lies in $L^2$. The operator $U(t)$ simply pushes the data around the closed ball in $L^2$ of radius $\|u_0\|_2$; the $L^2$ norm of the solution is conserved.
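Continuing the sketch above, this conservation is visible numerically (up to floating-point error), since the discrete Fourier transform satisfies the analogous Parseval identity:

```python
# Discrete L^2 norms of the datum and of the evolved solution agree.
norm = lambda f: np.sqrt(np.sum(np.abs(f)**2) * dx)
print(norm(u0), norm(U(0.5, u0)))   # equal up to ~1e-15 relative error
```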

That's nice, but what happens to the solution as $t\rightarrow0$? Do you actually recover the initial data? One would hope! What do you mean by recover? Ah, a convergence issue!

In the $L^2$-sense, a calculation like the one above shows that $U(t+h)u_0 \rightarrow U(t)u_0$ as $h\rightarrow0$. In particular, this holds for $t=0$, and we simply note that $U(0)u_0=u_0$. Here are some details: \begin{align*} \|U(t+h)u_0 - U(t)u_0\|_2 &= \|e^{i(t+h)\xi^3}\hat{u}_0 - e^{it\xi^3}\hat{u}_0\|_2 \\ &= \|e^{it\xi^3}[e^{ih\xi^3}-1]\hat{u}_0\|_2 \\ &= \|[e^{ih\xi^3}-1]\hat{u}_0\|_2. \end{align*} Now apply the Lebesgue Dominated Convergence Theorem to the final line: the integrand $|e^{ih\xi^3}-1|^2|\hat{u}_0(\xi)|^2$ tends to $0$ pointwise as $h\rightarrow0$ and is dominated by $4|\hat{u}_0|^2 \in L^1$, so the norm tends to $0$. This result is abbreviated by saying $$ u \in C(\mathbb{R} : L^2(\mathbb{R})), $$ which means the solution to the IVP describes a continuous curve $t \mapsto u(t)$ in $L^2$, where we have suppressed the spatial variable.
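In the sketch from before, this continuity at $t=0$ can be observed by shrinking $h$:

```python
# ||U(h)u0 - u0||_2 tends to 0 as h -> 0. The decay here is roughly
# linear in h, since |e^{i h xi^3} - 1| <= h |xi|^3 and the Gaussian
# datum has xi^3 * u0-hat in L^2.
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, norm(U(h, u0) - u0))
```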

Two comments:

  • We can replace $L^2$ with the Sobolev spaces $H^s$ above; see the short computation after this list.
  • By a standard theorem in analysis, we can extract a sequence $t_k \rightarrow 0$ so that $U(t_k)u_0 \rightarrow u_0$ pointwise a.e. in $x$. But do we generally get pointwise convergence? Here's an answer for the Schrödinger equation, where the full story is still an active area of research.
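Regarding the first bullet: with one standard convention for the $H^s$ norm, $\|f\|_{H^s} := \|(1+\xi^2)^{s/2}\hat{f}\|_2$, the same unimodularity trick gives $$ \|U(t)u_0\|_{H^s} = \|(1+\xi^2)^{s/2}e^{it\xi^3}\hat{u}_0\|_2 = \|(1+\xi^2)^{s/2}\hat{u}_0\|_2 = \|u_0\|_{H^s}, $$ and the continuity argument goes through verbatim with the extra weight $(1+\xi^2)^{s/2}$ carried along.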

One of the main advantages of $L^p$ spaces (for $1<p<\infty$) over the more "traditional" spaces like $C^k([a,b])$ is that the $L^p$ spaces are reflexive, i.e. the double dual $(L^p)''$ is (with an obvious identification) again given by $L^p$.

This implies many useful results, one of which is that for every bounded sequence $(f_n)_n$ in $L^p$, we can extract a subsequence $(f_{n_k})_k$ that converges weakly to some $f \in L^p$. This means that $\varphi(f_{n_k}) \rightarrow \varphi(f)$ for all bounded linear functionals $\varphi$ on $L^p$. By duality theory for $L^p$, these are always given by integration against some function $g_\varphi \in L^{p'}$, where $p'$ is the conjugate exponent determined by $\frac{1}{p} + \frac{1}{p'} = 1$.
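A classical example of weak-but-not-strong convergence is $f_n(x) = \sin(2\pi n x)$ on $[0,1]$: by the Riemann-Lebesgue lemma, $\int_0^1 f_n g \, dx \rightarrow 0$ for every fixed $g$, yet $\|f_n\|_2 = 1/\sqrt{2}$ for all $n$, so $f_n \not\rightarrow 0$ in norm. Here is a small numerical sketch; the test function $g$ is an arbitrary choice of mine:

```python
import numpy as np

# f_n(x) = sin(2 pi n x) on [0,1]: pairings against a fixed g shrink
# (weak convergence to 0), while the L^2 norm stays at 1/sqrt(2)
# (no strong convergence). The test function g is an arbitrary choice.
x = np.linspace(0.0, 1.0, 100_000, endpoint=False)
dx = x[1] - x[0]
g = np.exp(-x) * (1 + x)

for n in [1, 10, 100, 1000]:
    fn = np.sin(2 * np.pi * n * x)
    pairing = np.sum(fn * g) * dx          # <f_n, g>  -> 0
    l2norm = np.sqrt(np.sum(fn**2) * dx)   # stays ~ 0.7071
    print(n, pairing, l2norm)
```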

This fact is useful in many places, for example in the calculus of variations.

Here, one generally considers some (usually nonlinear) functional $\Phi : L^p \rightarrow \Bbb{R}$ and wants to show the existence of a minimizer in some class like $\Gamma := \{f \in L^p \mid \int_0^1 f dx = 0\}$.

[Sidenote: Usually, one considers the Sobolev spaces $W^{k,p}$ instead of the Lebesgue spaces, but these are built on the $L^p$ spaces, so that, for example, reflexivity carries over to $W^{k,p}$ for $1 < p < \infty$.

In the class of Sobolev functions one could e.g. take $\Gamma := \{f \in W^{1,p}([0,1]) \mid f(0) = f(1) = 0\}$.]

One then tries to show (using properties of $\Phi$) that any sequence $(f_n)_n$ in $\Gamma$ with $\Phi(f_n) \rightarrow \inf_\Gamma \Phi$ is bounded, and then extracts a subsequence $(f_{n_k})_k$ with $f_{n_k} \rightarrow f$ weakly.

One then tries to show (using properties of $\Gamma$ and $\Phi$, typically that $\Gamma$ is weakly sequentially closed and that $\Phi$ is weakly sequentially lower semicontinuous) that $f \in \Gamma$ as well as $\Phi(f) = \inf_\Gamma \Phi$.
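Indeed, if $\Phi$ is weakly sequentially lower semicontinuous (which holds, e.g., for convex, strongly continuous $\Phi$) and $f \in \Gamma$, the two steps combine to $$ \inf_\Gamma \Phi \leq \Phi(f) \leq \liminf_{k\rightarrow\infty} \Phi(f_{n_k}) = \inf_\Gamma \Phi, $$ so equality holds throughout.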

If all this can be done, the existence of a minimizer (namely $f$) follows.
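For a finite-dimensional impression of this, here is a sketch that minimizes a discretized model functional $\Phi(f) = \int_0^1 \big(\tfrac12 f'(x)^2 - g(x)f(x)\big)\,dx$ over the boundary-condition class from the sidenote; the choice of $g$, the grid, and the use of scipy.optimize.minimize are all illustrative choices of mine:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a discretized Phi(f) = int_0^1 (f'^2 / 2 - g f) dx subject to
# f(0) = f(1) = 0, over the interior grid values of f. Everything here
# (g, grid size, solver) is an illustrative choice.
m = 99
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)
g = np.sin(np.pi * x)

def Phi(f):
    fe = np.concatenate(([0.0], f, [0.0]))   # enforce f(0) = f(1) = 0
    df = np.diff(fe) / h                      # forward-difference f'
    return h * (0.5 * np.sum(df**2) - np.sum(g * f))

res = minimize(Phi, np.zeros(m))              # descend toward inf Phi
# The minimizer solves the Euler-Lagrange equation -f'' = g, whose exact
# solution is sin(pi x) / pi^2; the discrete answer should be close:
print(np.max(np.abs(res.x - g / np.pi**2)))   # small
```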

One can then often use this method to obtain solutions to (partial) differential equations, because a minimizer of a functional (which could be given by $\int_0^1 F(x, f(x), f'(x)) \, dx$, for example) will satisfy the so-called Euler-Lagrange equations.
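Concretely, for $\Phi(f) = \int_0^1 F(x, f(x), f'(x)) \, dx$ and under suitable smoothness assumptions on $F$ and the minimizer, the Euler-Lagrange equation reads $$ \frac{d}{dx}\Big(\partial_{f'} F(x, f, f')\Big) = \partial_f F(x, f, f'); $$ for $F = \frac{1}{2}|f'|^2 - gf$ this is exactly $-f'' = g$, as in the sketch above.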