Spotting crankery

Underwood Dudley published a book called Mathematical Cranks that surveys faux proofs throughout history. While it seems intended more for entertainment than anything else, I feel it has become more relevant in modern mathematics, especially with the advent of arXiv, where you can obtain research papers before they are peer reviewed by a journal. So how does one tell a crank proof from a genuine proof? This seems tough to discern in general.

For instance, Perelman's proof was not submitted to any journal but posted online. How did professional mathematicians discern that it was a genuine proof?

So how does one spot a crank proof? John Baez once (humorously) proposed a "Crackpot Index". Would that be a fair criterion?


This really should be a comment, but it is a bit long and perhaps important enough (since it addresses a common misconception), so I will post it as community wiki. I want to address the mention of Perelman's proof in the original post.

Firstly, one should be aware that Perelman, despite his current somewhat indistinct status, is and was a professional mathematician. He has a pretty impeccable academic pedigree from the Soviet Union, and held research positions at major North American universities in the 1980s and 1990s. Furthermore, the geometrization conjecture is not his only contribution to the field: in 1994 he was already quite renowned for having proven the Cheeger-Gromoll Soul Conjecture, which had stood open for 20 years, and he was also known for his work on comparison theorems in Riemannian geometry. So not only was he a known variable to other professional mathematicians on a sociological level (having worked both in the East and in the West [using the Cold War-era splitting]), he was a well-known variable in the sense that he had already demonstrated extreme technical proficiency. (Note that his proof of the Soul Conjecture was published in the traditional manner, in the prestigious Journal of Differential Geometry.)

In short: Perelman is not just some nobody who makes an astonishing claim.

Secondly, it was not clear from the get-go that Perelman did not intend to publish his proof in a traditional venue. Furthermore, he did not even ostensibly claim a proof of the geometrization conjecture (emphasis mine):

... We also verify several assertions related to Richard Hamilton's program for the proof of Thurston geometrization conjecture for closed three-manifolds, and give a sketch of an eclectic proof of this conjecture, making use of earlier results on collapsing with local lower curvature bound.

(Note that the above e-print was followed by this and this.) Furthermore, one should factor in that for the prestigious Annals of Mathematics (where results of this nature would usually be submitted), the average time from submission to acceptance is 19 months, and the average time from submission to print is 3 years. Many of the groundbreaking results in mathematics would have been read, torn apart, verified, understood, and possibly even improved upon before the paper copy actually hit the shelves. (In this case it took a little longer: full verification was completed by several groups working independently by 2006, partly because one particular theorem in Perelman's papers was accompanied by only a sketchy proof.) That is to say, whether a paper is important is usually not judged by whether it first appeared as a pre-print on arXiv.

Lastly, a very important point that is often glossed over in popular accounts of such stories (not just that of Grisha Perelman but also that of Andrew Wiles) is that despite the solitary working styles of the main protagonists, they did not conjure their solutions out of thin air entirely by themselves. In particular, based on already known works there already existed compelling evidence that the pursued line of attack had some chance of working. In the case of Wiles, he did not prove FLT per se: instead he proved the modularity theorem for semistable elliptic curves. Following a strategy described by Gerhard Frey in 1984, two ingredients were needed to settle Fermat's Last Theorem. The first ingredient was concretely identified by J-P Serre, was called the epsilon conjecture (now Ribet's Theorem), and was demonstrated to be true by Ken Ribet in 1986. The second ingredient is what is now called the modularity theorem, which was conjectured in the 50s and 60s as a general statement about elliptic curves (only some special cases of the general statement are needed for FLT) before its close connection with Fermat's Last Theorem was recognized. So in Wiles's case, the approach had already been well justified, and that the technical ingredient ought to be true was already expected/hoped for based on other previous works in algebraic number theory.

The setting for Perelman's work is not all that different. That the Poincaré conjecture is a subcase of Thurston's geometrization conjecture is well known (and trivial). That geometrization holds in certain special cases (hyperbolization for Haken manifolds) was proven by Thurston. And as mentioned in the quoted abstract above, Richard Hamilton developed the Ricci flow at least in part due to his interest in the geometrization conjecture, and had established a program and identified the main technical ingredient needed (a good procedure for Ricci flow with surgery) to use the Ricci flow machinery to prove geometrization. So at the level of the "large picture" the approach seemed reasonable. Furthermore, the main insight, the "entropy formula", gives a nice and tangible description of the mechanism which drives the proof. While the details still needed to be checked, it is something that looks believable, especially given some of the justifications derived from theoretical physics.


In addition to Willie's answer/comment:

About a year ago, last June, a paper by Gerhard Opfer got a bit of attention for claiming to solve the Collatz Conjecture (it didn't). It was submitted to Mathematics of Computation, which may have given it the seeming credibility that propelled it into the spotlight (this is always a mystery - I don't know what made the recent kid-who-sort-of-solved-an-old-Newton-problem thing such a firestorm either). It even led to a question here.
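For readers who haven't met the conjecture itself: it asserts that repeatedly applying "halve if even, otherwise triple and add one" to any positive integer eventually reaches 1. A minimal sketch of the iteration (the function name is mine):

```python
def collatz_steps(n):
    """Apply the Collatz map (n -> n/2 if even, 3n+1 if odd) until
    reaching 1, and return how many steps that took."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 famously takes 111 steps to reach 1
```

Checking termination for any particular starting value is trivial; the conjecture is the claim that this loop halts for every positive integer, and that is what remains unproven.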

This prompted me to write a short blog post about the Collatz Conjecture, Opfer's paper, and as a soft-answer to this soft-question, a bit about cranks and crank papers. (Ironically, writing that blog post somehow threw the spotlight on me as a destination for crank papers, and I've received a great many since.)

I think a large part of this aspect of the post can be summed up in two links: The Alternative-Science Respectability Checklist and Ten Signs a Claimed Mathematical Breakthrough is Wrong.

But I also happened across some articles from the writer-physicist or physicist-writer Jeremy Bernstein (much of whose work is published in periodicals like the New Yorker). He wrote an article called How can we be sure that Einstein was not a crank? (this is a link to a book containing the article), and he discusses two criteria for determining whether a new physics paper is from a crank or not.

The criteria don't quite port over to math so well, but there is an idea behind them that's true, just as the ideas behind the very humourous Ten Signs a Claimed Mathematical Breakthrough is Wrong are accurate in many ways. If I were to summarize some of his key points, I would say that Bernstein looks at 'correspondence' and 'predictiveness.' In the physics sense, 'correspondence' means that the result should explain why previous theories were wrong, and how the proposed idea agrees with experimental evidence. 'Predictiveness' is just what it says: a physics breakthrough should be able to predict some phenomena. If I were to cast these in a mathematical light, I suppose 'correspondence' would say that the math shouldn't contradict things we already know (a claimed method for solving all quintics by radicals, for example, should set off alarms). And if the result is a big, old one, like Collatz or the Millennium problems, I should think one needs to introduce something new, so that there is some explanation of why it hadn't been done before. Predictiveness really doesn't port so well. I suppose the strength of a mathematical result is sometimes measured by how much 'new math' it creates, and this is a sort of predictiveness... but it's not a great match.

But I'd like to end by noting that sometimes, especially in math, simple arguments for nonsimple results (whatever that really means) exist. One of my favorite examples is the paper PRIMES is in P!, the paper detailing the AKS algorithm for quickly determining whether a given number is prime. The arguments are entirely elementary, despite how big the result is. And, funnily enough, there is capitalization and excitement, indicators on some of the crank checklists. Yet the result is valid.
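To give a sense of just how elementary the test is, here is a direct, unoptimized sketch of the algorithm's five steps (function names are mine; this naive version is far too slow for large inputs, since a serious implementation needs fast polynomial arithmetic, but it follows the structure of the paper's test):

```python
import math

def _poly_mul(p, q, r, n):
    # Multiply polynomials in Z_n[X]/(X^r - 1): exponents wrap around mod r.
    res = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                res[(i + j) % r] = (res[(i + j) % r] + pi * qj) % n
    return res

def _poly_pow(p, e, r, n):
    # Square-and-multiply exponentiation in Z_n[X]/(X^r - 1).
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = _poly_mul(result, p, r, n)
        p = _poly_mul(p, p, r, n)
        e >>= 1
    return result

def aks_is_prime(n):
    if n < 2:
        return False
    # Step 1: reject perfect powers a^b with b >= 2.
    for b in range(2, n.bit_length() + 1):
        root = round(n ** (1.0 / b))
        for a in (root - 1, root, root + 1):
            if a > 1 and a ** b == n:
                return False
    log2n = math.log2(n)
    maxk = int(log2n ** 2)
    # Step 2: find the smallest r with multiplicative order of n mod r
    # exceeding log2(n)^2.
    r = 2
    while True:
        if math.gcd(r, n) == 1:
            x, small_order = 1, False
            for _ in range(maxk):
                x = (x * n) % r
                if x == 1:
                    small_order = True
                    break
            if not small_order:
                break
        r += 1
    # Step 3: a nontrivial common factor with some a <= r exposes a composite.
    for a in range(2, min(r, n - 1) + 1):
        if 1 < math.gcd(a, n) < n:
            return False
    # Step 4: at this point, any n <= r is prime.
    if n <= r:
        return True
    # Step 5: check (X + a)^n == X^n + a in Z_n[X]/(X^r - 1) for enough a.
    phi_r = sum(1 for k in range(1, r) if math.gcd(k, r) == 1)
    limit = int(math.sqrt(phi_r) * log2n)
    for a in range(1, limit + 1):
        lhs = _poly_pow([a % n, 1] + [0] * (r - 2), n, r, n)
        rhs = [0] * r
        rhs[n % r] = 1
        rhs[0] = (rhs[0] + a) % n
        if lhs != rhs:
            return False
    return True
```

The remarkable part is that nothing above goes beyond undergraduate algebra: modular arithmetic, gcds, and polynomial multiplication. The depth of the paper lies in proving that this handful of checks suffices.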