Intuition behind Fourier and Hilbert transform
These days I am studying a bit of Fourier analysis, in particular Fourier series and the Fourier and Hilbert transforms. I am comfortable with the mathematical definitions and all the formalism, and (more or less) I know all the main theorems. What I don't really understand is why these tools are so important, and why the concepts are defined in precisely that way.
Could you explain to me why all these concepts/tools are so significant and useful in (applied) mathematics? Could you give me some intuition behind them?
I am not particularly interested in mathematical formulae. I would simply like to know what these definitions really mean.
Pretend to be talking with someone smart, very curious but not very knowledgeable about mathematics.
Of course, I encourage not only mathematicians but also engineers and physicists to reply. Having a truly physical interpretation of those concepts would be great!!
Very last thing: I would really love to have some unconventional and "personal" interpretation/point of view.
Thank you very much for any help!!
Solution 1:
The Fourier transform diagonalizes the convolution operator (or linear systems). In other words, if you find convolution non-intuitive, it gets simplified into a simple point-wise product. It happens that the eigenvectors are cisoids (complex exponentials), hence the frequency-like interpretation. An operator that makes an essential operation simpler, just as the $\log$ turns multiplication into addition, is an important one. [EDIT1: see below for details].
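As a sanity check, here is a minimal numpy sketch (signal length and contents are arbitrary choices) showing that the DFT turns circular convolution into a pointwise product:

```python
import numpy as np

# A sketch: the DFT turns circular convolution into pointwise multiplication.
rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)
h = rng.standard_normal(n)

# Circular convolution computed directly from the definition.
direct = np.array([sum(x[k] * h[(m - k) % n] for k in range(n)) for m in range(n)])

# The same thing on the Fourier side: transform, multiply pointwise, transform back.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

assert np.allclose(direct, via_fft)
```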
The Hilbert transform is even more important. It turns a real function into its most "natural" complex extension: for instance, it turns a $\cos$ into a cisoid by adding $\imath \sin$ to it. The resulting complex extension satisfies the Cauchy–Riemann equations.
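You can watch this happen numerically with `scipy.signal.hilbert`, which returns the analytic signal $f + \imath\, Hf$; the grid and sample count below are arbitrary choices:

```python
import numpy as np
from scipy.signal import hilbert  # returns the analytic signal f + i*H[f]

# Sample cos(t) over a whole number of periods (the FFT-based implementation
# assumes periodicity); 256 samples is an arbitrary choice.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
analytic = hilbert(np.cos(t))

# The analytic signal of cos(t) is the cisoid e^{it} = cos(t) + i*sin(t).
assert np.allclose(analytic.real, np.cos(t))
assert np.allclose(analytic.imag, np.sin(t), atol=1e-8)
```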
Hilbert remains quite mysterious to me (Fourier as well, to be honest; I studied wavelets to understand Fourier). S. Krantz writes, in Explorations in Harmonic Analysis with Applications to Complex Function Theory and the Heisenberg Group, Chapter 2 (The Central Idea: The Hilbert Transform):
The Hilbert transform is, without question, the most important operator in analysis. It arises in so many different contexts, and all these contexts are intertwined in profound and influential ways. What it all comes down to is that there is only one singular integral in dimension 1, and it is the Hilbert transform. The philosophy is that all significant analytic questions reduce to a singular integral; and in the first dimension there is just one choice.
[EDIT1] Above I talked about Fourier transforms as if they were unique; in fact there are many Fourier flavors. In the continuous case, you can look for explanations in "Fourier transform as diagonalization of convolution". In the discrete case, convolution can be "realized" with (bi-infinite) Toeplitz matrices. In the finite-length setting, cyclic convolution matrices are diagonalized by the discrete Fourier transform (computed efficiently by the FFT).
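A small numpy sketch of the finite-length case (size and kernel are arbitrary choices): build the cyclic convolution matrix by hand, conjugate it with the DFT matrix, and watch it become diagonal:

```python
import numpy as np

n = 6
c = np.arange(1.0, n + 1)
# Cyclic-convolution ("circulant") matrix: C[i, j] = c[(i - j) mod n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
# DFT matrix: F[k, j] = exp(-2*pi*i*k*j/n), so that F @ x == np.fft.fft(x)
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

# F C F^{-1} is diagonal; the diagonal entries are the DFT of the kernel c.
D = F @ C @ np.linalg.inv(F)
assert np.allclose(D, np.diag(np.fft.fft(c)), atol=1e-8)
```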
[EDIT] In addition, F. King published a two-volume book on Hilbert transforms in 2009.
Solution 2:
The Fourier series is a way of building up functions on $[-\pi,\pi]$ in terms of functions that diagonalize differentiation, namely $e^{inx}$. If $L=\frac{1}{i}\frac{d}{dx}$ then $Le^{inx}=ne^{inx}$. That is, $e^{inx}$ is an eigenfunction of $L$ with eigenvalue $n$. The fact that all square integrable functions on $[-\pi,\pi]$ can be expanded as $f = \sum_{n=-\infty}^{\infty}c_n e^{inx}$ is quite a nice thing. If you want to apply the derivative operator $L$ to $f$, you just get $Lf = \sum_{n=-\infty}^{\infty}nc_ne^{inx}$. More generally, if $f$ has $N$ square integrable derivatives, then the $N$-th derivative is $$\frac{1}{i^{N}}f^{(N)}=L^{N} f = \sum_{n=-\infty}^{\infty}n^{N}c_n e^{inx}.$$
Diagonalizing an operator makes it easier to solve all kinds of equations involving that operator. The only issue is this: How do you find the correct coefficients $c_n$ so that you can expand a function $f$ in this way? For the ordinary Fourier series, $$ c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)e^{-int}dt. $$ On a finite interval, this is great. But what happens if you want to work on the entire real line? If you work on larger and larger intervals, then you get more and more terms. You need terms with larger and larger periods, and all multiples of those. In the limit of larger intervals, you need an integral to sum up all the terms, with every possible periodicity. That is, you can expand a square integrable $f$ as $$f(x) = \int_{-\infty}^{\infty}c(s)e^{isx}ds. $$ As before, applying powers of $L=\frac{1}{i}\frac{d}{dx}$ is easier using this representation of $f$: $$ \frac{1}{i^{N}}f^{(N)}(x)= L^{N}f = \int_{-\infty}^{\infty}s^{N}c(s)e^{isx}ds. $$ You can see that the discrete and the continuous cases are remarkably similar. And, based on that, how might you expect to be able to find the coefficient function $c(s)$? As you might guess, $$ c(s) = \frac{1}{2\pi}\int_{-\infty}^{\infty}f(x)e^{-isx}dx. $$ The Fourier transform is a way to diagonalize the differentiation operator on $\mathbb{R}$.
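Numerically, this "differentiate by multiplying the coefficients" idea is exactly how spectral differentiation works; a minimal numpy sketch, where the smooth periodic test function and grid size are arbitrary choices of mine:

```python
import numpy as np

# Spectral differentiation: expand f in e^{inx}, multiply the n-th coefficient
# by i*n (equivalently, apply L = (1/i) d/dx as multiplication by n), invert.
N = 64
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
f = np.exp(np.sin(x))                 # a smooth 2*pi-periodic test function

c = np.fft.fft(f)                     # discrete analogue of the c_n
n = np.fft.fftfreq(N, d=1.0 / N)      # integer frequencies 0, 1, ..., -2, -1
df = np.fft.ifft(1j * n * c).real     # d/dx applied on the coefficient side

assert np.allclose(df, np.cos(x) * np.exp(np.sin(x)), atol=1e-7)
```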
The reason that the discrete and continuous Fourier transforms are so important is that they diagonalize the differentiation operator. One way to view the effects of diagonalization is that you turn the operator into a multiplication operator. You can see how that makes solving differential equations a lot easier. In the coefficient space all you do is to divide in order to invert.
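For example, here is a sketch of solving the periodic equation $u - u'' = f$ purely by division on Fourier coefficients: in coefficient space the operator $I - \frac{d^2}{dx^2}$ becomes multiplication by $1+n^2$. The manufactured exact solution is an arbitrary choice:

```python
import numpy as np

# Solve u - u'' = f on a periodic domain by dividing Fourier coefficients.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
u_exact = np.cos(3 * x)
f = 10 * np.cos(3 * x)        # since u'' = -9 cos(3x), we get u - u'' = 10 cos(3x)

n = np.fft.fftfreq(N, d=1.0 / N)            # integer frequencies
u = np.fft.ifft(np.fft.fft(f) / (1 + n ** 2)).real   # invert by division

assert np.allclose(u, u_exact, atol=1e-9)
```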
It's the same way with a matrix: if you have a big matrix equation $$Ax = y,$$ and if $A$ is symmetric, then you can find a basis $\{ e_1,e_2,\cdots,e_n \}$ where $Ae_k = \lambda_k e_k$. If you expand $x$ and $y$ in this basis, $$x = \sum_{k=1}^{n} c_k e_k \\y = \sum_{k=1}^{n} d_k e_k,$$ then the equation is solved by division: $$Ax = y \\ \sum_{k=1}^{n} c_k \lambda_k e_k = \sum_{k=1}^{n} d_k e_k \\ c_k = \frac{1}{\lambda_k} d_k.$$
So if you know how to expand $y$ in the $e_k$ terms as $\sum_{k}d_k e_k$, then you can get the solution $x$ by division on the coefficients $$x = \sum_{k=1}^{n} \frac{1}{\lambda_k} d_k e_k$$ (Assuming none of the $\lambda_k$ are $0$.)
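The same recipe in code, as a sketch: `numpy.linalg.eigh` returns the orthonormal eigenbasis of a symmetric matrix, so expanding $y$ and dividing by the eigenvalues solves the system. The matrix and right-hand side are random; the diagonal shift is just to keep $A$ safely invertible:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M + M.T + 10 * np.eye(5)   # symmetric and comfortably invertible

lam, E = np.linalg.eigh(A)     # A @ E[:, k] = lam[k] * E[:, k], E orthonormal
y = rng.standard_normal(5)

d = E.T @ y                    # coefficients d_k of y in the eigenbasis
x = E @ (d / lam)              # divide by the eigenvalues and re-sum

assert np.allclose(A @ x, y)
```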
The discrete and continuous Fourier transforms are a way to diagonalize differentiation in an infinite-dimensional space. And that allows you to solve linear problems involving differentiation.
Hilbert Transform: The Hilbert transform was developed by Hilbert to study the operation of finding the harmonic conjugate of a function. For example, the function $f(z) = z^2=(x+iy)^2=x^2-y^2+i(2xy)$ has harmonic real and imaginary parts. Hilbert was trying to find a way to go between these two components (in this case, from $x^2-y^2$ to $2xy$). The setting of this transform is the upper half plane. If you start with a function $f(x)$ on the real line, find the function $\tilde{f}(x,y)$ that is harmonic in the upper half plane with boundary values $f$, and then find $g(x,y)$ such that $\tilde{f}(x,y)+ig(x,y)$ is holomorphic in the upper half plane, then the Hilbert transform maps $f$ to (the boundary values of) $g$.
Because $i(f+ig)=-g+if$ is also holomorphic, the transform maps $g$ to $-f$, which means that the square of the transform is $-I$. In this setting, the Hilbert transform turns out to be concisely expressed in terms of the Fourier transform if you work with square integrable functions.
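Concretely, on square integrable functions the Hilbert transform acts as the Fourier multiplier $-i\,\operatorname{sgn}(s)$. A small numpy sketch of the discrete periodic analogue (grid size is an arbitrary choice) checks both that $\cos$ and $\sin$ are conjugates and that the square of the transform is $-I$ on mean-zero signals:

```python
import numpy as np

# Discrete periodic Hilbert transform as a Fourier multiplier: multiply the
# coefficient at frequency n by -i*sign(n).
N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
n = np.fft.fftfreq(N, d=1.0 / N)

def H(f):
    return np.fft.ifft(-1j * np.sign(n) * np.fft.fft(f)).real

f = np.cos(t)
assert np.allclose(H(f), np.sin(t), atol=1e-8)   # cos and sin are harmonic conjugates
assert np.allclose(H(H(f)), -f, atol=1e-8)       # the square of the transform is -I
```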