What is the main purpose of learning about different spaces, like Hilbert, Banach, etc?
Solution 1:
$L^2$ function spaces arose out of Parseval's identity for the Fourier series, an identity that was known by the late 1700's: $$ \frac{1}{\pi}\int_{-\pi}^{\pi}|f(t)|^2dt = \frac{1}{2}a_0^2+\sum_{n=1}^{\infty}\left(a_n^2+b_n^2\right), $$ where the Fourier series for $f$ is $$ f(x) \sim \frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos(nx)+b_n\sin(nx). $$ That established a connection between square-integrable functions and an infinite-dimensional Euclidean space where sums of squares of coordinates converge. Not much was made of this connection at first. The Cauchy-Schwarz inequality for complex spaces would not be stated by Cauchy for another couple of decades (Schwarz's name was not attached to the original inequality; it bore Cauchy's name alone). In between, Fourier started his work on heat conduction, separation of variables, and the more general orthogonal expansions arising from these methods. Decades passed before, around 1850-1860, Schwarz published a paper on solutions of minimization problems in which he derived the Cauchy-Schwarz inequality for integrals, and it was realized that this inequality gave the triangle inequality. A new concept of distance and convergence was emerging.
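To see concretely how the integral form of the Cauchy-Schwarz inequality yields the triangle inequality (the step alluded to above): writing $\|f\|_2=\left(\int|f|^2\right)^{1/2}$, Cauchy-Schwarz says $\left|\int f\bar g\right|\le\|f\|_2\|g\|_2$, and therefore $$ \|f+g\|_2^2=\int|f+g|^2\le\int|f|^2+2\left|\int f\bar g\right|+\int|g|^2\le\|f\|_2^2+2\|f\|_2\|g\|_2+\|g\|_2^2=\left(\|f\|_2+\|g\|_2\right)^2, $$ so $\|f+g\|_2\le\|f\|_2+\|g\|_2$, and $d(f,g)=\|f-g\|_2$ behaves like a genuine distance.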
Over the next few decades, these ideas led Mathematicians to consider functions as points in a space, with distance and geometry imposed through norms and inner products. That was a game-changing abstraction. During this period of abstraction, a real number was defined rigorously for the first time, after roughly 24 centuries of trying to make sense of irrationality. Compactness was discovered, and abstracted to sets of functions through equicontinuity. Fourier's ideas were being recast in the context of the new, rigorous Math. Riemann developed his integral, and by the early 1900's Lebesgue had defined his, both with the stated goal of studying the convergence of Fourier series.
Cantor, Hilbert, and many others were laying the rigorous, logical foundations of Mathematics, and Hilbert abstracted the Fourier series to consider $\ell^2$ as an infinite-dimensional generalization of Euclidean space. Topology was being created, first through abstract metrics and then through neighborhood axioms in the new set theory. Function spaces were now fashionable, with $\ell^2$ and $L^2$ leading the way. Early in this 20th-century evolution, one of the Riesz brothers looked at continuous linear functionals on $C[a,b]$ and represented them as integrals. The idea of continuity of functionals was just being explored. Functional Analysis was born, and there was a push to explore abstract function spaces. Representing functionals was the order of the day. $L^p$ was a natural abstraction that cemented the idea of the dual as something separate and distinct from the original space. Hahn and Banach both discovered how to extend continuous linear functionals. Before this period in the early part of the 20th century, there was no distinction between a space and its dual. $L^p$ spaces became an important part of decoupling the space from its dual, and of providing convincing evidence that it was necessary to do so.
Then there was a move toward abstract operators, with Hilbert and von Neumann leading the way. By the time Quantum Mechanics arrived, all the pieces were in place to lay a foundation for it. Hilbert had already studied symmetric operators. The spectrum of an operator was defined well before it was realized that operators were a perfect fit for Quantum Mechanics, where it was later found that the Mathematician's spectrum was actually the Physicist's spectrum! von Neumann had proved the Spectral Theorem for self-adjoint operators.
Topological ideas abstracted from convergence, algebras of operators, functions, etc., set off a mushroom cloud of thought, helping to lead to other mushroom clouds.
Solution 2:
I'm not in this area, but I can tell you that the main issue here is the application of this kind of Mathematics to Quantum Mechanics. Indeed, even if Hilbert didn't start studying the subject with this in mind, it was soon found that this branch of mathematics was perfectly suited to modeling quantum phenomena.
What happened is that soon after Hilbert's work this apparatus became absolutely necessary even to formulate a quantum mechanical problem in the Heisenberg-von Neumann framework. Heisenberg et al. formulated some axioms that lie at the heart of QM, namely:
- a quantum system is described by a separable Hilbert space;
- the observables (i.e. the quantities you can observe) are self-adjoint operators on that space;
- if we are not too picky, the states the system can be in are the vectors of that Hilbert space; etc. (a finite-dimensional toy version of these axioms is sketched just below).
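Here is a minimal finite-dimensional sketch of that pattern (the specific $2\times 2$ matrix and state are arbitrary illustrative choices): a self-adjoint matrix plays the role of an observable, its real eigenvalues are the possible measurement outcomes, and a unit vector is a state.

```python
import numpy as np

# Toy "Hilbert space": C^2. A self-adjoint (Hermitian) matrix as an observable.
A = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)      # self-adjointness: A = A*

# Its spectrum is real: these are the possible measurement outcomes.
eigvals, eigvecs = np.linalg.eigh(A)
print("spectrum:", eigvals)

# A state: a unit vector in the Hilbert space.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Expectation value of the observable in this state: <psi, A psi>.
expectation = np.vdot(psi, A @ psi).real
print("expectation value:", expectation)
```

The whole difficulty of the subject is that in the infinite-dimensional case (the operators below) almost none of this is automatic.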
Now, couldn't we just learn about Hilbert spaces and forget everything else? In fact, no, for a few reasons. One is that we can reformulate the Heisenberg axioms in a slightly more general way, namely the von Neumann framework of C*-algebras. In this framework the space is not required to be a Hilbert space, only a Banach space (i.e. with a norm rather than an inner product) whose norm is well behaved with respect to the involution $*$. So knowing about Banach spaces is clearly important in this framework.
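For concreteness, the compatibility condition between the norm and the involution alluded to here is the C*-identity $$ \|a^*a\|=\|a\|^2\quad\text{for every element }a, $$ and the bounded operators on a Hilbert space, with the operator norm and the adjoint as involution, are the motivating example.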
But even if you're more of a down-to-earth type and you just want to compute the spectra of a few operators, you will soon realize that operators behave very differently depending on the space on which they are defined. A classical example is the momentum operator, which can be symmetric, self-adjoint, or merely essentially self-adjoint, depending only on the conditions imposed at the endpoints of an interval.
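A quick sketch of why the endpoint conditions matter, taking the momentum operator $P=-i\,\frac{d}{dx}$ on $L^2(0,1)$: integration by parts leaves a boundary term, $$ \langle Pf,g\rangle-\langle f,Pg\rangle=-i\left[f(1)\overline{g(1)}-f(0)\overline{g(0)}\right], $$ so whether $P$ is symmetric, and how large the domain of its adjoint is, depends entirely on the boundary conditions. With $f(0)=f(1)=0$ the boundary term vanishes and $P$ is symmetric but not self-adjoint; with the periodic condition $f(1)=f(0)$ (more generally $f(1)=e^{i\theta}f(0)$) the same differential expression defines a self-adjoint operator.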
But who cares whether this operator is self-adjoint or not? Well, you do, because an operator (i.e. a physical quantity) is an observable with real spectrum (i.e. one that gives physical results) if and only if it is self-adjoint, which means that if it is not self-adjoint you won't be able to make sense of the results of your experiments. So you may want to vary the domain of definition, and maybe even the Hilbert space itself, to get a setting where your operator is well behaved and, with luck, self-adjoint. In this process, passing to Sobolev spaces, $L^2(\text{something})$, etc., according to your needs is an everyday procedure, so you will need to know what you are doing if you hope to get any results.
But that's not all: since you may also want to apply two observables (e.g. position and momentum) one after the other, you may want to work in a space where you can do so without losing every self-adjointness property. So what you do is define a Schwartz space and use it in place of your original Hilbert space. And so on.
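For the record, the Schwartz space is $$ \mathcal S(\mathbb R)=\left\{f\in C^\infty(\mathbb R):\ \sup_x\left|x^m f^{(n)}(x)\right|<\infty\ \text{for all }m,n\ge 0\right\}, $$ and it is mapped into itself by both the position operator $f\mapsto xf$ and the momentum operator $f\mapsto -if'$, so compositions such as $XP$ and $PX$ (and hence the commutator $[X,P]=i$) make sense on it without ever leaving the space.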
In fact, everything developed in this field was not at all speculative; it was focused on real, effective, everyday problems. For example, separable Hilbert spaces were introduced because you want an orthonormal basis, just as in the old-fashioned finite-dimensional vector spaces; likewise for trace-class operators, projection-valued measures, etc. Everything here was developed in just the right way to work.
It's like driving a car developed by a million world-class engineers who have worked on the same car for more or less 100 years, travelling all sorts of roads in all sorts of places. Clearly everything has its reason, and it is not always immediate to understand why things are the way they are, until you find yourself in a place where you've never been and suddenly understand what that red button always at your right hand was for.