Are there any common practices in mathematics to guard against mistakes?

It occurred to me that math is somewhat like programming (or vice-versa, if you prefer) because, in both, it is easy to make mistakes or overlook them, and the smallest error or misguided assumption can make everything else go completely wrong.

In programming, in order to make code less error-prone and easier to maintain, there are principles such as "don't repeat yourself" and "refactor large modules into smaller ones". Ask any experienced programmer about these, and they will tell you why such principles are so important and how they make life easier in the long run.

Is there anything that mathematicians frequently do to protect themselves from mistakes and to make things generally easier? If so, what are they and how do they work?


One analogue of the technique of "refactor[ing] large modules into smaller ones" is that of breaking up the proof of a long theorem into a sequence of lemmas. This makes it easier to follow the logical flow of the argument, and hence to identify possible weak points. It also allows one to use "locally defined variables and notation", i.e. to introduce notation and auxiliary constructions that are used only in the proof of the lemma, and not in the larger context of the proof of the theorem. Somewhat analogously to the situation in programming (I think), this can eliminate clutter from the overall proof of the theorem, since having a lot of notation and auxiliary constructions hanging around in an argument can make it harder to follow.
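To make the analogy concrete, here is a schematic LaTeX sketch of this structure (the statements, labels, and the objects $S_\delta$ and $C$ are placeholders of my own, not taken from any particular proof): the auxiliary notation lives entirely inside the lemma's proof, and the theorem's proof only ever sees the lemma's statement.

```latex
% Schematic only: the statements are placeholders.
\begin{lemma}\label{lem:local-bound}
For every $\varepsilon > 0$ there exists $\delta > 0$ such that \dots
\end{lemma}

\begin{proof}
Write $S_\delta = \{x : |f(x)| < \delta\}$ and set $C = \sup_{x \in S_\delta} |g(x)|$.
% $S_\delta$ and $C$ are "locally defined": they are never used outside this proof.
\dots
\end{proof}

\begin{theorem}\label{thm:main}
\dots
\end{theorem}

\begin{proof}
Applying Lemma~\ref{lem:local-bound} with $\varepsilon = 1/n$ gives \dots
% The theorem's proof refers only to the lemma's statement, not to its internal notation.
\end{proof}
```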

The concept of "don't repeat yourself" (if I understand it correctly) also has an analogue, which again involves proving lemmas: mathematicians will often isolate various principles in technical lemmas, frequently stated in greater generality than is needed for any particular application. Then, rather than grinding out closely related arguments again and again, one can instead deduce the desired results as particular applications of the general technical lemma. (Nakayama's lemma in commutative algebra is one such example; the Baire category theorem is another.)
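For concreteness, one standard formulation of Nakayama's lemma reads:

```latex
\begin{lemma}[Nakayama]
Let $R$ be a commutative ring, let $I \subseteq R$ be an ideal contained in the
Jacobson radical of $R$, and let $M$ be a finitely generated $R$-module.
If $IM = M$, then $M = 0$.
\end{lemma}
```

Many routine "this finitely generated module must vanish" steps in an argument then become one-line applications of this single statement, instead of separate hand-rolled arguments.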

This also has the advantage of making the flow of the proof easier to follow, because one can typically recognize when the author is manipulating things so as to land in the setting of some well-known general lemma, whereas ad hoc constructions and arguments take much more effort to parse. (Of course, any new proof will require some new ideas, which demand concentration and focus from both the author and the reader; but one wants to reduce these moments to a minimum, so that the effort of analyzing the proof can be directed to the few places where it is really needed, rather than being diffused over the entire argument.)

A third technique that I'll mention, which is harder (for me) to make precise, is that of replacing computation by conceptual reasoning. What I mean is that mathematicians introduce concepts that have intuitive underpinnings. (A simple one is orthogonality of vectors, which has a purely algebraic definition in terms of the vanishing of the dot product, but a geometric interpretation as the two vectors being at 90 degrees to one another. A more advanced example is Galois's introduction of group theory to provide a conceptual analysis of the problem of solving polynomial equations, which eventually replaced the earlier, more formula-driven approaches.) With such concepts in hand, rather than reasoning by algebraic computations, one can try to reason in terms of (or, at least, structure the argument in accordance with) the meanings of the various concepts involved. This makes arguments easier to follow, and so easier to check. (Of course, it also becomes a potential source of error, since it provides the opportunity to inadvertently substitute intuition, which could be wrong, for logic; but experience has shown, in my view, that the benefits of conceptual reasoning outweigh the disadvantages.)
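To spell out the simple example above: for nonzero vectors $u, v \in \mathbb{R}^n$ making an angle $\theta$, the algebraic and geometric descriptions are linked by

```latex
u \cdot v = \|u\|\,\|v\|\cos\theta,
\qquad\text{so}\qquad
u \cdot v = 0 \iff \cos\theta = 0 \iff \theta = 90^{\circ},
```

and an argument phrased in terms of perpendicular directions can be checked both algebraically and against geometric intuition.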


My standard method of finding serious mistakes is to look for counterexamples. There are two main ways I find them: one is to look for extremely powerful conclusions that could be drawn from the result, and the other is to test the result against a standard zoo of examples.

The first seems to me similar to asymptotic analysis of an algorithm. If the claimed asymptotic running time is shorter than what any correct algorithm would need (for instance, the algorithm runs so fast it cannot even read enough of the input to decide the output correctly), then something is wrong. Whenever I see a computational result, I ask how I could use it to improve other algorithms. If the resulting combined algorithm is faster than is possible, then there was a mistake somewhere.
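As a toy version of that sanity check (everything here is illustrative: the "claimed" bound stands in for a hypothetical flawed analysis, and the reference point is the standard information-theoretic lower bound for comparison sorting), one can simply compare the claim against what is known to be impossible:

```python
# A minimal sketch of the "too powerful to be true" check, using comparison
# sorting as the benchmark: any comparison sort needs at least ceil(log2(n!))
# comparisons in the worst case.  The "claimed" bound below is hypothetical.
import math

def comparison_lower_bound(n: int) -> int:
    """Information-theoretic lower bound on worst-case comparisons."""
    return math.ceil(math.log2(math.factorial(n)))

def claimed_worst_case(n: int) -> int:
    """Hypothetical claim from a flawed analysis: n comparisons suffice."""
    return n

for n in [8, 16, 32]:
    lb, claim = comparison_lower_bound(n), claimed_worst_case(n)
    verdict = "impossible, so something is wrong" if claim < lb else "plausible"
    print(f"n = {n:2d}: claimed {claim:3d} vs lower bound {lb:3d} -> {verdict}")
```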

The second seems a bit like testing, as in sending pre-recorded inputs and comparing the computed outputs to pre-recorded outputs. In my zoo of examples, I know more or less everything about each example, so I already have the "correct" output for any possible result, and I just check what the new result actually produces. Some busy programmers even begin by writing tests, and then write only enough code to pass the tests. A few busy mathematicians begin with examples, and prove only enough theorems to understand the examples.
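Here is a minimal sketch of that workflow (the "conjecture" and the zoo are purely illustrative): run a tentative claim against a handful of examples whose behaviour is completely known.

```python
# A toy version of "testing against a zoo of examples": check the (false)
# claim "2**p - 1 is prime whenever p is prime" against small primes whose
# behaviour is completely known.  The claim and the zoo are illustrative only.

def is_prime(n: int) -> bool:
    """Trial division; plenty fast for the small numbers in the zoo."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

zoo = [2, 3, 5, 7, 11, 13]  # small primes we understand completely

for p in zoo:
    m = 2**p - 1
    print(f"p = {p:2d}: 2^p - 1 = {m:5d} is {'prime' if is_prime(m) else 'composite'}")
```

The failure at p = 11 (2047 = 23 * 89) plays the role of a failing test case: it localizes the error before any proof is attempted.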

The main drawback of counterexamples is that they don't always exist. In some proofs by contradiction, one has already assumed the impossible, so a minor step in the middle may be wrong even though there is no actual counterexample with all of the presupposed properties (since nothing whatsoever has all of the presupposed properties). In programming this is not always a problem: if a program works on all inputs and produces correct output, it is normally considered correct, even if some of its code (possibly never executed) is incorrect when taken in isolation. In mathematics, however, one generally does not tolerate mistakes in a proof, even if the flawed part turns out to be superfluous. Such mistakes are easy to fix, but they still cast doubt on the whole.