Whittaker gives two proofs of Fourier's theorem, assuming Dirichlet's conditions. One is Dirichlet's proof, which proceeds by directly summing and estimating the partial sums and can be found in many books. The other is an absolutely stunning proof of Fourier's theorem in terms of residues: the terms of the partial sums are realized as residues of a meromorphic function, and one shows that, on taking the limit, the hypotheses needed are exactly Dirichlet's conditions.

My question is about understanding the latter half of the residue proof, given here. The gist of the proof is to consider a trigonometric series with real coefficients, assume the coefficients are the Fourier coefficients of a function $f$, and then simplify the partial sum

\begin{align} S_k(f) &= a_0 + \sum_{m=1}^k \bigl(a_m \cos(mz) + b_m \sin(mz)\bigr) \\ &= \frac{1}{2 \pi} \int_0^{2 \pi} f(t)\,dt + \frac{1}{\pi} \sum_{m=1}^k \int_0^{2 \pi} f(t)\cos[m(z-t)]\, dt \\ &= \sum_{m=-k}^k \frac{1}{2\pi} \int_0^{2 \pi} f(t)e^{im(z-t)}\, dt \\ &= \sum_{m=-k}^k \frac{1}{2\pi} \int_0^z f(t)e^{im(z-t)}\, dt + \sum_{m=-k}^k \frac{1}{2\pi} \int_z^{2 \pi} f(t)e^{im(z-t)}\, dt \\ &= U_k + V_k. \end{align}

Next we try to realize $U_k$ as a sum of residues of a meromorphic function. Since $e^{2\pi w} - 1$ has a simple zero at each $w = im$ ($m \in \mathbb{Z}$) with derivative $2\pi e^{2\pi i m} = 2\pi$, the summand $\frac{1}{2\pi}\int_0^z f(t)e^{im(z-t)}\,dt$ is exactly the residue at $w = im$ of $$\phi(w) = \frac{1}{e^{2 \pi w} - 1} \int_0^z f(t)e^{w(z-t)}\, dt,$$ and the $m = 0$ term is the residue at $w = 0$. Hence, if $C_k$ is the circle in the $w$-plane of radius $k + 1/2$ centered at the origin, which encloses the poles $0,\pm i,\pm 2i,\dots,\pm ki$ and no others, the residue theorem gives $$ \frac{1}{2 \pi i} \oint_{C_k} \phi(w)\, dw = U_k.$$ From this we integrate over the boundary explicitly via $w = (k + 1/2)e^{i\theta}$, so that $dw = iw\,d\theta$ and $U_k$ reduces to $$U_k = \frac{1}{2 \pi} \int_0^{2 \pi} w\, \phi(w)\, d \theta,$$ and from here on we are supposed to end up with Dirichlet's conditions.
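As a sanity check on the identity $\frac{1}{2\pi i}\oint_{C_k}\phi(w)\,dw = U_k$ before worrying about the limit, here is a quick numerical verification (just a sketch; the test function $f$, the point $z$, and the grid sizes are arbitrary choices of mine, not part of Whittaker's argument):

```python
import numpy as np

# Numerical sanity check of (1/(2*pi*i)) * oint_{C_k} phi(w) dw = U_k
# for a hypothetical test function f and a fixed point z in (0, 2*pi).
f = lambda s: np.exp(np.cos(s)) + 0.3 * np.sin(3 * s)
z, k = 2.0, 5

def trap(y, x):
    # simple composite trapezoid rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

t = np.linspace(0, z, 20001)

def phi(w):
    """phi(w) = (e^{2 pi w} - 1)^{-1} * int_0^z f(t) e^{w(z - t)} dt."""
    return trap(f(t) * np.exp(w * (z - t)), t) / (np.exp(2 * np.pi * w) - 1)

# U_k computed directly from its definition as a sum over m = -k, ..., k.
U_k = sum(trap(f(t) * np.exp(1j * m * (z - t)), t) for m in range(-k, k + 1)) / (2 * np.pi)

# Contour integral over |w| = k + 1/2, parametrized by w = (k + 1/2) e^{i theta},
# so that (1/(2*pi*i)) oint phi dw = (1/(2*pi)) int_0^{2*pi} w phi(w) dtheta.
n = 2000
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
w = (k + 0.5) * np.exp(1j * theta)
contour = sum(wj * phi(wj) for wj in w) / n

print(np.allclose(U_k, contour, atol=1e-5))  # should print True
```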

Can anybody explain the rest of the proof? Since this step seems to be the crux at which other, flawed proofs go wrong, I need to make sure I understand the rest of it with no hand-waving; as it stands it seems unmotivated to me.

[Images of the relevant pages of the proof, omitted here.]


The motivation is that you're trading the sum of all finite residues for a single "residue at infinity." For example, if a holomorphic function $F$ had only a finite number of poles in the plane, and $\Gamma$ enclosed all of those poles in its interior, then the sum of the residues would be $$ \frac{1}{2\pi i}\oint_{\Gamma}F(\lambda)\,d\lambda = -\frac{1}{2\pi i}\oint_{1/\Gamma}F(1/\mu)\frac{1}{\mu^2}\,d\mu. $$ The negative cancels the new negative orientation of $1/\Gamma$, and the end result of the integration, as you let $\Gamma$ expand without bound in, say, a circle, would be $$ \lim_{\mu\rightarrow 0}\frac{F(1/\mu)}{\mu}= \lim_{\lambda\rightarrow \infty}\lambda F(\lambda). $$ There are lots of issues in how that limit is achieved, but that's the basic idea.

This idea works very nicely for matrices and operators, too. Fredholm was the pioneer in this type of analysis, and his work fueled the earliest forms of Spectral Theory through the use of the resolvent operator $R(\lambda)=(L-\lambda I)^{-1}$. (Resolvent was a term coined by Fredholm, who was also the first to define a linear operator.) For example, if you have an $N\times N$ selfadjoint matrix $L$, then you can show that $(\lambda I-L)^{-1}$ has simple poles at the eigenvalues, and the residue at such a pole is the projection onto the eigenspace associated with that eigenvalue. Completeness of the eigenvectors is then a consequence of the fact that the sum of these residues is $$ \lim_{\lambda\rightarrow\infty}\lambda (\lambda I-L)^{-1} = I. $$ And you really can make this rigorous, because you can show the above limit is $I$ for any $N\times N$ matrix. Furthermore, regardless of the matrix, you can show that the residue at an eigenvalue is a projection onto the space spanned by the Jordan blocks associated with that eigenvalue.

So you have completeness due to an unusual conservation law associated with holomorphic functions: by looking at the Riemann sphere, you see that you can trade the finite residues for the residue at $\infty$. For normal and selfadjoint matrices, all poles are first order and the residues are the orthogonal projections onto the corresponding eigenspaces. The Complex Analysis trick then shows that the projections onto the eigenspaces of a normal matrix must sum to the identity $I$, thereby proving that the eigenvectors form a basis.
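To make the matrix picture concrete, here is a small numerical check (a sketch only; the particular $3\times 3$ symmetric matrix and the contour radius are arbitrary choices of mine): the residue of $(\lambda I - L)^{-1}$ at each eigenvalue, computed as a contour integral, is the orthogonal projection onto the corresponding eigenspace, and the residues sum to $I$.

```python
import numpy as np

# A small symmetric (selfadjoint) test matrix with distinct eigenvalues 3 and 3 +/- sqrt(3).
L = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
eigvals, eigvecs = np.linalg.eigh(L)
I = np.eye(3)

def residue(mu, radius=0.5, n=400):
    """(1/(2*pi*i)) * oint (lambda I - L)^{-1} dlambda on a small circle around mu."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    lam = mu + radius * np.exp(1j * theta)
    dlam = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)
    total = sum(np.linalg.inv(l * I - L) * dl for l, dl in zip(lam, dlam))
    return total / (2j * np.pi)

residues = [residue(mu) for mu in eigvals]
projections = [np.outer(v, v) for v in eigvecs.T]   # rank-one spectral projections

print(all(np.allclose(R, P, atol=1e-8) for R, P in zip(residues, projections)))  # True
print(np.allclose(sum(residues), I, atol=1e-8))                                  # True
```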

This trick also works for more general operators, such as the differentiation operator $Lf = \frac{1}{i}\frac{d}{dx}f$ on $L^2[0,2\pi]$, where the domain is chosen to consist of continuously differentiable periodic functions with $f' \in L^2$ (absolutely continuous is even better). The resolvent can be computed directly for this operator by solving $(L-\lambda I)g=f$ for $g$ as a function of $\lambda$; that is, given $f$, one solves the first-order ODE $$ \frac{1}{i}g'-\lambda g = f, \qquad g(0)=g(2\pi). $$ It is my understanding that Cauchy came up with the proof given by Whittaker by considering this equation.

To solve this equation, multiply by $i$ and then by the integrating factor $e^{-i\lambda t}$ to obtain $$ e^{-i\lambda t}g'-i\lambda e^{-i\lambda t}g = ie^{-i\lambda t}f \\ \frac{d}{dt}(e^{-i\lambda t}g)=ie^{-i\lambda t}f \\ e^{-i\lambda t}g(t) = i\int_{0}^{t}e^{-i\lambda s}f(s)ds + C \\ g(t) = ie^{i\lambda t}\int_{0}^{t}e^{-i\lambda s}f(s)ds+Ce^{i\lambda t}. $$ The constant $C$ is determined by requiring periodicity: $$ C = g(0) = g(2\pi)=ie^{2\pi i\lambda}\int_{0}^{2\pi}e^{-i\lambda s}f(s)ds+Ce^{2\pi i\lambda} \\ C(1-e^{2\pi i\lambda})=ie^{2\pi i\lambda}\int_{0}^{2\pi}e^{-i\lambda s}f(s)ds \\ C = \frac{ie^{2\pi i\lambda}}{1-e^{2\pi i\lambda}}\int_{0}^{2\pi}e^{-i\lambda s}f(s)ds. $$ Therefore, $g(t,\lambda)$ is given by $$ g(t,\lambda)=\left(i\int_{0}^{t}f(s)e^{-i\lambda s}ds +\frac{ie^{2\pi i\lambda}}{1-e^{2\pi i\lambda}}\int_{0}^{2\pi}e^{-i\lambda s}f(s)ds\right) e^{i\lambda t}. $$

Notice that the residue of this expression at $\lambda = n$ is the negative of the projection of $f$ onto the one-dimensional eigenspace spanned by $e^{int}$: $$ R_n = -\frac{1}{2\pi}\int_{0}^{2\pi}f(s)e^{-ins}ds\, e^{int}. $$ (The negative is because of using $(L-\lambda I)^{-1}$ instead of $(\lambda I-L)^{-1}$.) You can write $g$ in a more symmetric form simply by splitting the integral over $[0,2\pi]$ into integrals over $[0,t]$ and $[t,2\pi]$, and that's how the analysis of $\lim_{\lambda\rightarrow\infty}\lambda g(t,\lambda)$ is carried out, after noting that the $g$ associated with a constant $f \equiv C$ is easily found to be $-C/\lambda$, because a constant function is periodic and $(\frac{1}{i}\frac{d}{dt}-\lambda)\frac{-C}{\lambda}=C$. (I believe Whittaker's analysis stems from the use of the symmetric form.)
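Here is a small numerical illustration of the residue claim (again only a sketch; the test function, the evaluation point, and the quadrature sizes are arbitrary choices of mine): the contour integral $\frac{1}{2\pi i}\oint g(t,\lambda)\,d\lambda$ around $\lambda = n$ reproduces $-\frac{1}{2\pi}\int_0^{2\pi}f(s)e^{-ins}\,ds\; e^{int}$.

```python
import numpy as np

# Hypothetical smooth 2*pi-periodic test function and a fixed evaluation point t0.
f = lambda s: np.exp(np.cos(s)) * np.sin(2 * s)
t0 = 1.3

def trap(y, x):
    # simple composite trapezoid rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

def g(t, lam, n_quad=20001):
    """g(t, lambda) = (L - lambda I)^{-1} f, from the explicit formula above."""
    s1 = np.linspace(0, t, n_quad)
    s2 = np.linspace(0, 2 * np.pi, n_quad)
    I1 = trap(f(s1) * np.exp(-1j * lam * s1), s1)
    I2 = trap(f(s2) * np.exp(-1j * lam * s2), s2)
    C = 1j * np.exp(2j * np.pi * lam) / (1 - np.exp(2j * np.pi * lam)) * I2
    return (1j * I1 + C) * np.exp(1j * lam * t)

def residue(n, radius=0.3, n_pts=200):
    """(1/(2*pi*i)) * oint g(t0, lambda) dlambda around lambda = n."""
    theta = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    lam = n + radius * np.exp(1j * theta)
    dlam = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n_pts)
    return sum(g(t0, l) * dl for l, dl in zip(lam, dlam)) / (2j * np.pi)

# Compare with the negated Fourier projection term -1/(2*pi) <f, e^{ins}> e^{i n t0}.
s = np.linspace(0, 2 * np.pi, 20001)
for n in (-2, 0, 3):
    proj = trap(f(s) * np.exp(-1j * n * s), s) / (2 * np.pi) * np.exp(1j * n * t0)
    print(n, np.allclose(residue(n), -proj, atol=1e-4))   # True for each n
```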

Reference: E. C. Titchmarsh, *Eigenfunction Expansions Associated with Second-Order Differential Equations*.

Titchmarsh proves a general expansion theorem that includes the ordinary Fourier case in the first 20 pages of the first chapter. Titchmarsh was a student of G. H. Hardy, and he pioneered much of the rigorous pointwise analysis for this subject.

Further Detail: Rewrite $g(x,\lambda)=R(\lambda)f$ by splitting the integral over $[0,2\pi]$ at $x$: \begin{align} R(\lambda)f & = \frac{i}{1-e^{2\pi i\lambda}}\left\{ \int_{0}^{x} e^{i\lambda (x-t)}f(t)\,dt + \int_{x}^{2\pi}e^{i\lambda(2\pi-(t-x))}f(t)\,dt\right\} \\ & = -\frac{i}{1-e^{-2\pi i\lambda}} \left\{ \int_{0}^{x} e^{-i\lambda(2\pi-(x-t))}f(t)\,dt + \int_{x}^{2\pi}e^{-i\lambda(t-x)}f(t)\,dt \right\}. \end{align} The first form is convenient for examining the resolvent for $\Im\lambda \ge 0$, and the second is convenient for $\Im\lambda \le 0$: in each case the exponentials appearing in the integrals have modulus at most $1$. When examining the integral of the resolvent on a circle of half-integer radius $|\lambda|=N+1/2$, the function $1/(1-e^{2\pi i\lambda})$ is uniformly bounded by a constant $M$ (independent of $N$) for $\Im\lambda \ge 0$, and the exponentials in the first form are well behaved. A similar analysis may be carried out using the second form for $\Im\lambda \le 0$.
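As a check on the algebra (again a sketch with a test function of my own choosing), the split form above can be compared numerically with the explicit formula for $g(t,\lambda)$ obtained earlier:

```python
import numpy as np

# Hypothetical test function; x is a fixed point in (0, 2*pi).
f = lambda s: np.exp(np.cos(s)) * np.sin(2 * s)
x = 1.3

def trap(y, s):
    # simple composite trapezoid rule
    return np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2

def g_direct(lam, n=20001):
    """g(x, lambda) from the explicit formula with the periodicity constant C."""
    s1 = np.linspace(0, x, n)
    s2 = np.linspace(0, 2 * np.pi, n)
    I1 = trap(f(s1) * np.exp(-1j * lam * s1), s1)
    I2 = trap(f(s2) * np.exp(-1j * lam * s2), s2)
    C = 1j * np.exp(2j * np.pi * lam) / (1 - np.exp(2j * np.pi * lam)) * I2
    return (1j * I1 + C) * np.exp(1j * lam * x)

def g_split(lam, n=20001):
    """The first (split) form of R(lambda)f displayed above."""
    t1 = np.linspace(0, x, n)
    t2 = np.linspace(x, 2 * np.pi, n)
    J1 = trap(np.exp(1j * lam * (x - t1)) * f(t1), t1)
    J2 = trap(np.exp(1j * lam * (2 * np.pi - (t2 - x))) * f(t2), t2)
    return 1j / (1 - np.exp(2j * np.pi * lam)) * (J1 + J2)

# The two expressions should agree at any non-integer lambda; here a few samples,
# including a point on the half-integer circle |lambda| = 5.5 with Im(lambda) > 0.
for lam in (0.37 + 0.0j, 2.5j, 5.5 * np.exp(0.3j * np.pi), -3.5 + 1.0j):
    print(lam, np.allclose(g_direct(lam), g_split(lam), atol=1e-5))   # True
```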

More to come later ...