Can a proof be just words? [closed]

Exactly as thorough as you would have to be using any other kind of symbols. It's just that vast messes of symbols are hellish for humans to read, while sentences aren't. Adding symbols to something doesn't make it more rigorous, less likely to be wrong, or anything else, really. Symbols are useful for abbreviating where that adds clarity and for making complex arguments easier to follow, but they shouldn't be used where they don't help in that regard.


Yes, they can, and I'm of the opinion that symbolism and notation should be avoided unless they serve to simplify the presentation of the material or to perform calculations. For example, suppose you want to cut a cube so that each face shows a three-by-three grid of smaller cubes, like a Rubik's Cube. With a little thought and experimentation, one might conjecture that six is the minimal number of cuts. The best proof of this that I know of is simply "Consider the faces of the center cube." Six cuts are required because the center cube has six faces, and the result follows immediately. No symbols or calculation, but still logical and mathematically sound.
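To unpack that one-liner a little (this is my own elaboration, not part of the original argument): every straight cut lies in a single plane, the six faces of the center cube lie in six distinct planes, and none of them is on the surface of the big cube, so each must be exposed by its own cut:

$$\text{number of cuts} \;\ge\; \#\{\text{distinct planes containing a face of the center cube}\} \;=\; 6.$$

Since two parallel cuts along each of the three axes achieve the dissection, six is exactly the minimum.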


Natural language for expressing mathematical statements can indeed be vague and ambiguous. However, when you study mathematics, one of the first things you usually learn is how to use mathematical terminology in a precise, unambiguous way (at least for communication with other people trained in mathematical terminology). This process usually takes some time if you are not a genius (I guess it took me about two years at university until I became reasonably fluent), so unfortunately I fear I cannot give you a small set of rules for which kinds of language are "right" for mathematical proofs and which are "wrong". This is something you can only learn by practice.

Hence, the answer is IMHO "yes, words are fine, when used correctly by a trained expert". (Amazingly, one could say the same about more formal proofs using symbols.)

Note that historically, before the 18th century, proofs written in natural language were the de facto standard in mathematics. Most of the symbolic notation we use today was developed in the 18th and 19th centuries.


Two points:

(i) Historically, all proofs were done in words—the use of standardised symbols is a surprisingly recent development. This is obscured a bit because a modern edition of, say, Euclid's Elements is likely to have had the words translated into modern notation.

(ii) Before symbols can be used they have to be defined, and ultimately that definition will be in words. It's easy to forget this, especially with the symbols we use all the time and learnt in childhood. But, for example, we once had to learn that $2+3=5$ was short for "Two things together with three things is the same as five things".

Though a lot of us learnt instead that $2+3=5$ meant "Three things added to two things makes five things".

Now, these two definitions are different. One makes $2+3$ into an operation done to $2$, and treats $=$ as an instruction to carry it out; the other says that the number on the right has the same value as the expression on the left. The notation, though, doesn't make this distinction, and it's possible to spend years using the $=$ sign as though it meant "put the result of the operation on the left on the right".

So in this case we've got one string of symbols ($2+3=5$), a correct definition, and a misleading definition. And how do we clarify the correct meaning of the symbols? By choosing which verbal definition to use. The precision is in the words (at least if they're well chosen).

Of course, more advanced symbols will most likely have some mathematical symbols in their definitions—but ultimately, we'll get back to words.
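To see where that regress into words bottoms out, here is a sketch assuming the usual Peano-style recursive definition of addition, in which every symbolic step is licensed by one of two verbal clauses: "adding zero changes nothing" and "adding the successor of a number gives the successor of the sum". Writing $s$ for the successor:

$$2+3 \;=\; 2+s(2) \;=\; s(2+2) \;=\; s(2+s(1)) \;=\; s(s(2+1)) \;=\; s(s(2+s(0))) \;=\; s(s(s(2+0))) \;=\; s(s(s(2))) \;=\; 5.$$

Every equals sign in the chain is justified by one of those two sentences, not by any further symbol.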


For your particular example:

Just keep distributing $A$ over and over ad nauseam and you get the term on the right.

would not be a convincing proof. This is not because it is in words, however -- words are perfectly fine.

But it fails to convince because the intersection is over an infinite family of sets. Your proposal would work fine for a finite intersection, in that it gives a recipe for constructing an algebraic proof that would itself be convincing. And in ordinary mathematics a convincing recipe for a convincing proof is itself as good as the real thing.

But for an infinite intersection, the algebraic calculation you're describing never ends! No matter how many steps you carry out, your expression will still contain an intersection of infinitely many $A_i$s that $A$ has yet to be distributed over. So your recipe does not lead to a finite proof, and infinite things (to the extent they are "things" at all) are not convincing arguments.
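For contrast, here is the kind of argument that does convince, and it is finite no matter how large the index set is. I'm assuming the identity in question was the distributive law $A \cup \bigcap_{i \in I} A_i = \bigcap_{i \in I} (A \cup A_i)$ (a guess on my part; the same element-chasing template works for similar identities). Instead of distributing, test an arbitrary element:

$$x \in A \cup \bigcap_{i \in I} A_i \;\iff\; x \in A \ \text{or}\ \forall i\,(x \in A_i) \;\iff\; \forall i\,\bigl(x \in A \ \text{or}\ x \in A_i\bigr) \;\iff\; x \in \bigcap_{i \in I} (A \cup A_i).$$

The middle equivalence is the only real step (if $x \notin A$, the right-hand side forces $x \in A_i$ for every $i$), and it handles all the $A_i$ at once rather than one at a time.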


There are ways to convert some cases of infinitary intuition into actual convincing proofs, but they have subtle pitfalls, so you can't get away with using them -- whether in words or in symbols -- unless you also convince the reader/listener that you know what these pitfalls are and have a working strategy for avoiding them. Typically this means you need to describe explicitly how you handle the step from "arbitrarily but finitely many" to "infinitely many" (or, in more sophisticated phrasing: what do you do at a limit ordinal?).
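As one standard shape such an argument can take (a generic template, not necessarily the right tool in every case): an induction along an ordinal-indexed construction has to supply three clauses, and the limit clause is exactly the "arbitrarily many" to "infinitely many" step just mentioned:

$$P(0), \qquad P(\alpha) \implies P(\alpha+1), \qquad \bigl(\forall \beta < \lambda:\ P(\beta)\bigr) \implies P(\lambda) \quad \text{for limit ordinals } \lambda.$$

An argument that only ever proves the successor clause is the never-ending calculation from before in disguise; the limit clause is what lets a finite proof cover infinitely many stages.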

A somewhat unheralded part of mathematics education is that over time you get to see enough examples of this to collect a toolbox of "usual tricks". When communicating in a situation where you trust that everyone knows the usual tricks, you can often get away with not even specifying which trick you're using, provided everybody present is experienced enough to see quickly that one of the usual tricks will obviously work.