How rare is it that a theorem with published proof turns out to be wrong?
There is a story I read about tiling the plane with convex pentagons.
You can read about it in this article on pages 1 and 2.
Summary of the story: In his doctoral work, a mathematician classified the ways of tiling the plane with convex pentagons and proved that his list covered all possible cases. Later, a puzzle was published in a popular science magazine asking readers to find all these classes. One of the readers found a tiling that did not belong to any of the classes, so the claim and its published proof turned out to be wrong.
Reading this made me think about some questions.
Is it rare for a theorem to be proved, the proof published, and the theorem later turn out to be wrong?
Can we somehow estimate how many theorems out there that we think are right are actually wrong? I bet that if, in our case, the theorem had been about tilings of $\mathbb{R}^3$, nobody would ever have noticed.
What effect can such theorems have on mathematics in general? Could this be a serious issue for mathematics, even if the wrong theorems themselves were not very important, when other results were built on them?
Solution 1:
It is uncommon, but not rare, for false theorems to be published. It is somewhat more common, although still not frequent, for flawed proofs to be published even though the theorem as stated is correct. One way to find these is to search for "correction", "corrigendum", or "retraction" on MathSciNet. Peer review catches some of these errors, but mathematics is a human field in the end.
Every specialty has its own anecdotes about flawed proofs that are used to scare some caution into graduate students. For example, I would guess many logicians and analysts have heard how Lebesgue falsely claimed in print that every analytic set is Borel (in 1905).
In principle, the discovery of a flawed proof could mean that people have to re-check many other results. But in practice the effect is usually localized. Many researchers are cautious about using new results "blindly", once they realize that errors are not rare. In particular, being cautious means making sure you understand how to prove the results you use in your own proofs, whenever possible, so that nobody can later say your proof is flawed. By the time you are working on research papers, it becomes extremely unsatisfying to use a result of someone else as a "black box" without understanding it. On the other hand, it would be perfectly possible for a flawed proof to linger for a while if nobody else needs to use the result.
One of the roles of monographs and textbooks is to give the theorems another round of vetting. A result that is proved in secondary books is somehow more reliable than a result that can only be traced to a single research paper. This is another reason that errors tend to cause only localized problems: by the time a result becomes standard in its field, a large number of mathematicians will have looked at it.
Solution 2:
I think Carl Mummert's answer is spot-on. One other thing worth mentioning: more common than papers that are wrong, or even that have a recognizably flawed proof, are papers that are just really hard to understand. (This is sometimes compounded by language barriers... more than once I've had to deal with papers that were written in Russian and then either never translated or translated incoherently.) Some of these proofs are incomplete, others are complete but hard to follow, some are mostly correct but have a few errors, and so on...
So people generally take a cautious approach. One good piece of advice my PhD adviser gave me in grad school: never use a result whose proof you don't understand. Some people will rely on a proof they don't understand provided it is well established and understood by many others, figuring that's an acceptable risk. Others (I'm one of them) won't use any proof they don't understand. This may be more doable in some fields than in others. So while there are wrong theorems out there, the effect is minimized by people recognizing this fact and proceeding with caution.