Did Joseph Fourier ever make a pure mathematical mistake?

Solution 1:

Anyone back then working on anything that had anything to do with calculus or related topics could hardly avoid making mistakes, since there simply was no logically coherent formulation of the basic definitions at that time. Trying to prove something about continuous functions without a definition of continuity is going to lead to problems.

Fourier in particular is famous for stating that any periodic function is equal to the sum of its Fourier series. This is nonsense (see the comment below). But it's one of the all-time great errors: trying to make sense of it, to see what could actually be proved in this direction, was one motivation for the development of modern rigorous analysis. Sorting it out was part of the impetus for at least three major developments that spring to mind:

  • Cauchy, Weierstrass, and others invent epsilons and deltas. Now we can actually state and prove things about calculus rigorously.

  • But the theory of Fourier series, although it now made sense logically, still didn't work as well as we'd like; Lebesgue and others invent the Lebesgue integral and the theory of Fourier series gets a big boost.

  • Cantor was actually led to set theory, in particular transfinite numbers, in the course of investigations into Fourier series! (When you're studying sets of uniqueness for trig series the notion of the "derived set" $E'$ of $E$ comes up; this is the set of limit points of $E$. Then one can consider $E''$, etc; this leads naturally to a study of $E^\alpha$ for infinite ordinals $\alpha$.)

(The first two items above are hugely well known. For more on the third, regarding Cantor, set theory and Fourier series, you might look here or here. Will R suggests you look here; I haven't watched it (my internet is too slow for YouTube), but a lecture by Walter Rudin on the topic is certain to be great.)


Comment: I had no idea that the assertion that there exists a (continuous) function with a divergent Fourier series would be controversial. Writing down an explicit example is not easy; any continuous function that Fourier ever encountered does have a convergent Fourier series.

But proving the existence is very simple, from the right point of view. Say $s_n(f)$ is the $n$-th partial sum of the Fourier series for $f$ and $D_n$ is the Dirichlet kernel, so that $$s_n(f)(0)=\frac1{2\pi}\int_0^{2\pi}f(t)D_n(t)\,dt.$$The norm of the linear functional $f\mapsto s_n(f)(0)$ on $C(\Bbb T)$ is the same as the norm of $D_n$ regarded as a complex measure, which is in turn equal to $\|D_n\|_1$ (with respect to the normalized measure $dt/2\pi$). It's easy to see that $\|D_n\|_1\ge c\log n$. So the Uniform Boundedness Principle, aka the Banach-Steinhaus Theorem, shows that there exists $f\in C(\Bbb T)$ such that the sequence $s_n(f)(0)$ is unbounded.
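A quick numerical sanity check of the key estimate (my own sketch, not part of the original argument): with the normalization $dt/2\pi$, the quantities $\|D_n\|_1$ are the classical Lebesgue constants, which satisfy $\|D_n\|_1 = \frac{4}{\pi^2}\log n + O(1)$, comfortably above $c\log n$. The hypothetical helper below estimates the integral with a midpoint Riemann sum:

```python
import numpy as np

def lebesgue_constant(n, num_points=400_000):
    """Estimate (1/2pi) * integral over [0, 2pi] of |D_n(t)| dt, where
    D_n(t) = sin((n + 1/2) t) / sin(t/2) is the Dirichlet kernel.
    Midpoints avoid the removable singularities at t = 0 and t = 2pi."""
    dt = 2 * np.pi / num_points
    t = (np.arange(num_points) + 0.5) * dt
    Dn = np.sin((n + 0.5) * t) / np.sin(t / 2)
    return np.sum(np.abs(Dn)) * dt / (2 * np.pi)
```

Comparing `lebesgue_constant` at $n = 100$ and $n = 1000$, the difference quotient against $\log n$ comes out close to $4/\pi^2 \approx 0.405$, matching the logarithmic growth that Banach-Steinhaus needs.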

Solution 2:

Presumably the reference is to:

  • Imre Lakatos, Proofs and Refutations: The Logic of Mathematical Discovery (1977), Appendix 1.1, "Cauchy's Defence of the 'Principle of Continuity'", pp. 127 ff.,

referring to Fourier's example, in his Mémoire sur la Propagation de la Chaleur (1812), of a convergent series of continuous functions whose sum is discontinuous, contradicting Cauchy's "principle of continuity".

But we are not speaking of "mistakes" in the sense of calculation errors or the like; what Lakatos discusses are "exceptions" to a general theorem, cases where the proof neglects some condition necessary for its general validity.

Solution 3:

Fourier was right about his conjecture concerning expanding functions in a Fourier series, if you consider what "functions" were at the time. It's fun to pounce on Fourier, and a lot of people have drunk that Kool-Aid, but the facts don't support what people typically claim. Set theory didn't exist, and general functions had not yet been conceived. The general functions of Fourier's time were piecewise arcs, and Fourier did demonstrate convergence to the mean of the left and right limits for such functions. It is false that Dirichlet gave the first proof of this fact. In fact, Dirichlet's proof was almost identical to the one given by Fourier, and it was Fourier who introduced the "Dirichlet kernel." It is quite possible that Dirichlet took his proof from Fourier's manuscript, which had been denied publication. It is true that Fourier also gave several wrong demonstrations, but the "Dirichlet kernel" should really be called the "Fourier kernel."

Imagine doing what Fourier did at a time when the following had not yet been defined: (1) the Riemann integral, (2) the real numbers, (3) set theory and general functions, (4) completion of a space and convergence of Cauchy sequences, (5) functional analysis, (6) inner product spaces, (7) the Cauchy-Schwarz inequality. It's important to keep historical perspective, and to keep in mind that a large part of analysis came out of trying to resolve Fourier's claims.

Debunking False Claims

Quoting from the well-regarded 1926 Introduction to the Theory of Fourier's Series and Integrals by H. S. Carslaw:

Solution 4:

In addition to other good answers, it might be worth noting that if we do not insist that functions be "pointwise", and do not insist that convergence of partial sums of Fourier series be pointwise, then there is no difficulty in making a very broad interpretation (not Fourier's original) of "everything is represented by its [Fourier] series" completely legitimate. A starting point is the (originally disturbing-to-me) fact that while Fourier series of continuous functions do converge in $L^2$, there are problems with pointwise convergence; but there are no problems with $L^2$ convergence, even for general $L^2$ functions. The simplicity of the Hilbert-space aspects is exploited in ($L^2$-) Sobolev spaces (initiated c. 1906-7 by Beppo Levi and G. Fubini in the guise of "energy norms", and then systematically by Sobolev in the 1930s), which leads to the assertion that every distribution on the circle has a Fourier series converging to it in a suitable Sobolev space. Termwise differentiation is always justified (if interpreted distributionally). And so on.

E.g., the objection that $\sum_{n\in\mathbb Z} 1\cdot e^{2\pi inz}$ does not converge pointwise is essentially irrelevant to the provable fact that it does converge to the periodic Dirac $\delta$ in the Sobolev space $H^{-1/2-\epsilon}$ for every $\epsilon>0$, where $H^s$ is the completion of smooth functions on the circle with respect to the $H^s$ norm defined at first on test functions by $|f|^2_s=\sum_{n\in\mathbb Z} |\widehat{f}(n)|^2\cdot (1+n^2)^s$.
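Since $\widehat{\delta}(n)=1$ for all $n$, the $H^s$ distance from the symmetric partial sum $S_N$ to $\delta$ is exactly the tail $\|\delta-S_N\|_s^2=\sum_{|n|>N}(1+n^2)^s$, which tends to $0$ precisely when $s<-1/2$. A small numerical check of this (my own sketch, truncating the tail at a large cutoff; the helper name is made up):

```python
import numpy as np

def delta_tail_norm_sq(N, s, cutoff=10**6):
    """Truncated tail sum over |n| > N of (1 + n^2)**s, i.e. the squared
    H^s error of the N-th symmetric partial sum of the delta's Fourier
    series, using delta-hat(n) = 1 and |f|_s^2 = sum (1+n^2)^s |f-hat(n)|^2."""
    n = np.arange(N + 1, cutoff, dtype=float)
    return 2.0 * np.sum((1.0 + n**2) ** s)
```

For $s=-0.6<-1/2$ the tail decreases as $N$ grows (convergence in $H^{-0.6}$), while at the borderline $s=-1/2$ the full series diverges logarithmically: doubling the cutoff keeps adding roughly $2\log 2$ of mass.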

My interpretation of such possibilities is that, in many applications (both to physical sciences and to more esoteric mathematical situations) the reason Fourier series work so well is that, despite our inherited penchant for worrying about pointwise behavior, it's not pointwise behavior that matters very much, but, rather, various averaged versions.