I always have a hard time explaining the importance of rigor to my friends who are not mathematically minded. Many past mathematicians developed the foundations of today's mathematics without going through all the rigor, yet their work is still relevant now, e.g. the calculus of Newton and Leibniz and Euler's work on infinite series.

Is there a historical event in which a wrong conclusion was derived due to a lack of rigor and caused a harmful effect?

Edit: Thanks so much for the lively discussion! I just want to point out that I'm not interested in counterintuitive results or paradoxes; rather, I'm looking for instances where a wrongly derived conclusion caused a disastrous effect.


Solution 1:

This was one of the top voted questions on Math Overflow. You'll find many interesting answers there.

Added: One of my favorite examples regarding the unreliability of numerics is the difference $\operatorname{li}(x)-\pi(x)$, where $\pi(x)=\sum_{p\leq x}1$ is the prime counting function and $\operatorname{li}(x)=\int_{2}^x \frac{1}{\log t}\,dt$ is the logarithmic integral. The prime number theorem tells us that $$\lim_{x\rightarrow \infty}\frac{\pi(x)}{\operatorname{li}(x)}=1,$$ but we can still ask what $\operatorname{li}(x)-\pi(x)$ looks like. If we create a plot going all the way to $10^{22}$, which is very far, it appears that $\operatorname{li}(x)-\pi(x)$ is always about $\sqrt{x}$. That is, it increases to infinity, and the data seem to suggest that we could find a monotonic function $f(x)$ about the size of $\sqrt{x}$ for which $\operatorname{li}(x)-f(x)$ would be an even better approximation to $\pi(x)$.
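For small $x$ one can compute the difference directly; here is a minimal numerical sketch, assuming the `sympy` and `mpmath` Python packages are available (the plot to $10^{22}$ of course relies on tabulated values of $\pi(x)$ rather than a direct computation):

```python
# Minimal sketch: compare li(x) - pi(x) with sqrt(x) for small x.
# Assumes sympy (exact prime counting) and mpmath (logarithmic integral).
from sympy import primepi            # pi(x) = number of primes <= x
from mpmath import li, sqrt          # li(x, offset=True) = integral from 2 to x of dt/log t

for k in range(2, 8):                # x = 10^2, ..., 10^7
    x = 10 ** k
    diff = li(x, offset=True) - int(primepi(x))
    print(f"x = 10^{k}:  li(x) - pi(x) = {float(diff):10.1f},  sqrt(x) = {float(sqrt(x)):10.1f}")
```

In this range the difference stays positive and keeps growing, in line with the picture described above.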

However, this is not true at all. Littlewood proved that $\operatorname{li}(x)-\pi(x)$ switches sign infinitely often, and we believe that the first time it does so is around $10^{316}$.

I first learned about this while reading the expository paper Prime Number Races by Andrew Granville and Greg Martin. I recommend reading it; it is accessible and interesting. The result and numerics I mentioned above are on pages 5 and 6, at the start of Section 2.

Solution 2:

As your friends are non-mathematical, you might try examples they can understand: simple paradoxes. For example, if the base and height of the red triangle are each 1, then the length of each staircase below is 2, so we have a simplistic argument that $2 = \sqrt{2}$. An error like this, born of a lack of rigor, would cause major trouble with Google Maps, with measurements for constructing a building ... (trying to stay non-mathematical here). There are many such easy paradoxes.

[Figure: staircase curves approximating the diagonal of the triangle]

(I know this argument is wrong; the point is that a lack of rigor can be dangerous.)
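For completeness, here is the computation the picture hides: at stage $n$ the staircase consists of $2n$ segments, each of length $1/n$, so its total length is $$L_n = 2n\cdot\frac{1}{n}=2\quad\text{for every }n,$$ while the diagonal it approaches has length $\sqrt{1^2+1^2}=\sqrt{2}$. The staircases converge to the diagonal, but their lengths do not converge to its length; arc length is not continuous under this kind of limit, and that is precisely the step the naive argument skips.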

Added per @Steven's comments below:

There are times at which such naive dissections do give the correct answer, such as when we dissect a circle of radius $r$ into arcs and rearrange them into an approximate rectangle of (limiting) height $r$ and width $\tfrac{\text{circumference}}{2}=\pi r$.

[Figure: a circle cut into arcs and rearranged into an approximate rectangle]
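For the record, the naive limit here does land on the right answer: since cutting into arcs and rearranging never changes the area, the limiting rectangle of height $r$ and width $\pi r$ has area $$r\cdot\pi r=\pi r^2,$$ which is exactly the area of the circle. The rigorous question is why the passage to the limit is legitimate here but not for the staircase.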

Without rigor, we cannot know when to trust our 'reasoning'.

Solution 3:

The "era of rigor" in mathematics began about 150 years ago, a time in which a lot of the mathematics before this era was turned rigorous and cleared from many mistakes. For this reason it is not very trivial to come up with good examples without knowing a lot about mathematical history.

One of the reasons that pre-rigorous mathematics is still useful is that it was essentially created to help develop physical theories. There was no real theoretical mathematics as we have today, and in fact if you look closely you will see that many mathematicians of that time were also physicists, chemists and alchemists (not to mention dark wizards and theological nuts).

I can give a similar example about the axiom of choice, which, as a mathematically inclined person, you can probably relate to. Consider the mathematicians of exactly a century ago. Many refused to accept the axiom of choice, and it was not fully clear whether it was consistent or preposterous. Only after Gödel's work did people begin to relax the constant questioning (although you still see people today who claim that the axiom of choice is false).

Compare the above situation to the common mathematician of 2012. The vast majority of mathematicians assume the axiom of choice so bluntly that they use it for things for which it is not even close to being needed. In fact, I hear set theorists admit that they have absolutely no intuition whatsoever about how things work without the axiom of choice. Of course this can be remedied, but people will not really put that much effort into it, because they prefer to use their time for better things.

The case is similar with the importance of rigor: we are so deep into the age of rigor that we have essentially forgotten what non-rigorous mathematics looks like. You can see it under the surface when talking to a mathematician about something he is currently working on, but even then people usually tend to be careful with their words so as to avoid making false claims.

The most important use of rigor, I think, comes from logic and set theory, in the form of the various paradoxes of set theory. When set theory was first developed by Cantor there was still not enough rigor in mathematics to well-define all the notions needed (Cantor's paradox is an exceptionally good example). Another excellent example is König's theorem. König thought he had found a contradiction in set theory, but the mistake lay in his own understanding, originating in a not-so-rigorous proof of some claim that he had read. The result is actually one of the basic theorems of cardinal arithmetic.
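For reference, the theorem in question states that if $\kappa_i<\lambda_i$ for every $i\in I$, then $$\sum_{i\in I}\kappa_i<\prod_{i\in I}\lambda_i;$$ a standard consequence is that $\operatorname{cf}(2^{\aleph_0})>\aleph_0$, so, for example, $2^{\aleph_0}\neq\aleph_\omega$.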

It is important to understand that once we began dealing with the transfinite, it was no longer "intuitive" to our physical intuition how things should work. This caused many people to reject these ideas and belittle them (transfinite-related ideas, the axiom of choice, and so on). Nowadays we know that if one does not use intuition coming from the physical world, but rather develops a mathematical intuition based on the definitions and the axioms of mathematics, one can have a good grasp of how the transfinite basics work.

Another good example is the Weierstrass function, which is continuous but nowhere differentiable. Before it appeared, it was commonly believed that all continuous functions were differentiable almost everywhere. Gauss himself used topological arguments to prove the fundamental theorem of algebra, although he did not prove those arguments themselves.
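For concreteness, the Weierstrass function mentioned above is $$W(x)=\sum_{n=0}^{\infty}a^n\cos(b^n\pi x),$$ with $0<a<1$ and $b$ an odd integer satisfying $ab>1+\tfrac{3\pi}{2}$: the series converges uniformly, so $W$ is continuous, yet $W$ is differentiable at no point.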

All in all, in mathematics we have "notion" and "definition". Rarely do these two coincide; usually we think about objects in one way and define them to be approximately what we would like them to be. The introduction of rigor made sure that we work from the definition, step by step, while being guided by the notion, and not the other way around.