The art of proof summarizing. Are there known rules, or is it a purely common sense matter?

When a proof is long and difficult, it can be a real courtesy to the reader to give a summary or an outline of the argument before beginning the hard work.

Are there known rules to give a good proof summary?

Are there known rules for deciding which points to emphasize in a proof summary? I mean rules for finding the "nervus probandi", the crux of the argument.


I agree with the commenters that this is a difficult question to answer, and what I write here may be only a partial answer. I've not encountered any set of guidelines for summarising a mathematical proof in particular, but in general the same principles that apply to summarising any large or complex topic can be used. (If you are interested in mathematical proof in particular you might want to look at https://www.math.wustl.edu/~sk/eolss.pdf which is a history of mathematical proof writing by Steven Krantz.)

A good summary tells the reader what to expect from the story (the proof) that follows, highlights any prerequisite knowledge, and motivates the reader to make the effort to follow the argument. If you look at the Bourbaki style of proof, for example, there is often none of this -- a theorem is stated and proved, and it is up to the reader to contextualise it, link it to previous work and knowledge, and find a reason for remembering it. However, if you look at some of Steven Krantz's books you will find that he spends the majority of a chapter motivating and explaining the ideas, and relegates the actual mathematical proof to the very end of the chapter -- the complete antithesis of a Bourbaki proof.

To write a good summary the author must understand the material thoroughly: in fact, being able to summarise a proof well is a good indication that the author has properly understood it. If at any point the author finds themselves waving their hands, or glossing over a detail, the chances are that there is something there that they have not yet made clear in their own mind.

As an example, then, consider the stalwart of calculus lectures, the Intermediate Value Theorem. This says that if we have a continuous function defined on a contiguous set of points (a 'closed interval'), and the function is negative at one endpoint and positive at the other, then there is a point inside that interval where the function value is zero. This is nicely summarised by saying "the graph of a continuous function has no breaks in it". That summary immediately suggests a way to start thinking about the theorem (draw a graph), connects it to things the reader already knows about (how to graph a function), and highlights that the reader should know what a continuous function is before proceeding.
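For reference, a precise formulation of the special case being summarised (the symbols here are my own, not part of the informal statement above) is:

$$\text{If } f\colon [a,b] \to \mathbb{R} \text{ is continuous and } f(a) < 0 < f(b), \text{ then there is some } c \in (a,b) \text{ with } f(c) = 0.$$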

EDIT: ruakh points out in the comments that I've summarised the theorem and not the proof, for which I apologise. To summarise the proof then:

Since $f$ is negative at one endpoint and positive at the other, and is continuous, we can show that at the supremum of the set of points where $f$ is negative, the value of $f$ is $0$. The value $0$ plays a key role here only as the target value, so we can expect to find points where $f(x)=c$ by considering $f(x)-c$ instead, and to find roots of polynomials by finding points $a$ and $b$ with $f(a)<0$ and $f(b)>0$ (which might lead us to the bisection method; a sketch is given at the end of this answer).

This achieves our goals in summarising: we know what to expect coming up (the study of the set $\{x : f(x)<0\}$), we know that we need to know what a supremum is, and we can see how we might use the result in future.
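As an aside, here is a minimal sketch of the bisection method that the proof summary hints at. It is illustrative only and not part of the argument above: the function name `bisect`, the tolerance `tol`, and the example polynomial are my own choices, and the code assumes a continuous `f` with `f(a) < 0 < f(b)`.

```python
def bisect(f, a, b, tol=1e-10):
    """Approximate a root of a continuous f with f(a) < 0 < f(b).

    Repeatedly halve the interval, keeping the half on which f still
    changes sign; the Intermediate Value Theorem guarantees that a root
    remains inside the bracket at every step.
    """
    if not (f(a) < 0 < f(b)):
        raise ValueError("need f(a) < 0 < f(b) to guarantee a root")
    while b - a > tol:
        m = (a + b) / 2
        if f(m) < 0:
            a = m  # the sign change, and hence a root, lies in [m, b]
        else:
            b = m  # the sign change, and hence a root, lies in [a, m]
    return (a + b) / 2


# Example: approximate the positive root of x^2 - 2, i.e. sqrt(2).
print(bisect(lambda x: x * x - 2, 1.0, 2.0))  # roughly 1.414213562...
```

Each iteration re-establishes the hypotheses of the theorem on a smaller interval, which is exactly the structure the summary points to.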