Why did mathematicians take Russell's paradox seriously?

Solution 1:

Russell's paradox means that you can't just take some formula and treat the collection of all sets satisfying that formula as a set in its own right.

This is a big deal. It means that some collections cannot be sets, which in the universe of set theory means these collections are not elements of the universe.

Russell's paradox is a form of diagonalization. Namely, we construct a "diagonal" object in order to establish some property, typically by deriving a contradiction. The best-known arguments featuring diagonalization are the proof that the universe of set theory is not a set (Cantor's paradox), and Cantor's theorem that every set has strictly smaller cardinality than its power set.
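To make the analogy concrete (a sketch, not part of the original argument): in Cantor's theorem, given any function $f$ from a set $A$ to its power set, one forms the diagonal set $D = \{a \in A : a \notin f(a)\}$; then $D$ cannot equal $f(a_0)$ for any $a_0 \in A$, since $a_0 \in D \leftrightarrow a_0 \notin f(a_0)$. Russell's set $R = \{x : x \notin x\}$ is the same construction carried out on the whole universe, with membership diagonalized against itself; instead of merely witnessing that $f$ misses $D$, one gets the outright contradiction $R \in R \leftrightarrow R \notin R$.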

The point, eventually, is that some collections are not sets. This is a big deal when your universe is made out of sets, but I think that one really has to study some set theory in order to truly understand why this is a problem.

Solution 2:

My dad likes to tell of a quotation he once read in a book on the philosophy of mathematics. He does not remember which book it was, and I have never tried to track it down; this is really hearsay to the fourth degree, so it may not even be true. But I think it is pretty apt. The quote describes a castle on a cliff where, after each storm finally dies down, the spiders come out and run around frantically rebuilding their spiderwebs, afraid that if they don't get them up quickly enough the castle will fall down.

The interesting thing about the quote was that it was attributed to a book on the logical foundations of mathematics.

First, note that you are looking at the problem from the perspective of someone who "grew up" with sets that were, in some way, carefully "built off each other in a nice way." This was not always the case. Mathematicians were not always very careful with their foundations, and when they started working with infinite sets/collections, they were not being particularly careful. Dedekind does not start from the Axiom of Infinity to construct the naturals and eventually get to the reals; moreover, when he gives his construction, it is precisely to try to answer the question of just what a real number is!

In some ways, Russell's paradox was a storm that sent the spiders running around to reconstruct the spiderwebs. Mathematicians hadn't been working with infinite collections/sets for very long, at least not as "completed infinities". The work of Dedekind on the reals, and even on algebraic number theory with the definitions of ideals and modules, was not without its critics.

Some mathematicians had become interested in the issues of foundations; one such mathematician was Hilbert, both through his work on the Dirichlet Principle (justifying the work of Riemann), and his work on Geometry (with the problems that had become so glaring in the "unspoken assumptions" of Euclid). Hilbert was such a towering figure at the time that his interest was itself interesting, of course, but there weren't that many mathematicians working on the foundations of mathematics.

I would think, like Sebastian, that most "working mathematicians" didn't worry too much about Russell's paradox, much as they didn't worry too much about the fact that calculus was not, originally, on a solid logical foundation. Mathematics clearly worked, and the occasional antinomy or paradox was likely not a matter of interest or concern.

On the other hand, the 19th century had highlighted a lot of issues with mathematics. During that century, all sorts of tacit assumptions that mathematicians had been making were exploded. It turns out functions can be discontinuous at far more than a few isolated points; they can be continuous everywhere but differentiable nowhere; a curve can fill up a square; the Dirichlet Principle need not hold; there are geometries with no parallels, and geometries with infinitely many parallels to a given line through a point outside it; etc. While it was clear that mathematics worked, there was a general "feeling" that it would be a good idea to clear up these issues.

So some people began to study foundations specifically, and try to build a solid foundation (perhaps like Weierstrass had given a solid foundation to calculus). Frege was one such.

And to people who were very interested in logic and foundations, like Frege, Russell's paradox was a big deal, because it pinpointed one particular, very widely used tool that led to serious problems. This tool was unrestricted comprehension: any "collection" you could name was an object that could be manipulated and played with.

You might say, "well, but Russell's paradox arises in a very artificial context; it would never show up with a 'real' mathematical collection." But then, one might say that functions that are continuous everywhere and nowhere differentiable are "very artificial, and would never show up in a 'real' mathematical problem." True: but it means that certain results that had been taken for granted can no longer be taken for granted, and need to be restricted, checked, or justified anew if you want to claim that an argument is valid.

In context, Russell's paradox showed an entirely new thing: there can be collections that are not sets, that are not objects that can be dealt with mathematically. This is a very big deal if you don't even have that concept to begin with! Think about finding out that a "function" doesn't have to be "essentially continuous" and can be an utter mess: an entirely new concept or idea, an entirely new possibility that has to be taken into account when thinking about functions. So with Russell's paradox: an entirely new idea that needs to be taken into account when thinking about collections and sets. All the work that had been done before, which tacitly assumed that just because you could name a collection it was an object that could be mathematically manipulated, was now, in some sense, "up in the air" (as much as those castle walls are "up in the air" until the spiders rebuild their webs, perhaps, or perhaps more so).

If nothing else, Russell's paradox created an entirely new category of things that did not exist before: not-sets. Now you think, "oh, piffle; I could have told them that," but that's because you grew up in a mathematical world where the notion that there are such things as "not-sets" is taken for granted. At the time, the exact opposite was taken for granted, and Russell's paradox essentially told everyone that something they all thought was true just isn't true. Today we are so used to the idea that it seems like an observation that is not worth very much, but that's because we grew up in a world that already knew it.

I would say that Russell's paradox both was and wasn't a big deal. It was a big deal for anyone who was concerned with foundations, because it said "you need to go further back: you need to figure out what is and what is not a collection you can work with." It undermined Frege's entire attempt at setting up a foundation for mathematics (which is why Frege found it so important: he had invested a lot of himself into efforts that were not only cast into doubt, but essentially demolished before they got off the ground). It was such a big deal that it completely infuses our worldview today, when we simply take for granted that some things are not sets.

On the other hand, it did not have a major impact on things like the calculus of variations, differential equations, etc., because those fields do not really rely very heavily on their foundations, only on the properties of the objects they work with; just as most people don't care about the Kuratowski definition of an ordered pair: it's kind of nice to know it's there, but most will treat the ordered pair as a black box. I would expect most of them to think, "Oh, okay; get back to me when you sort that out." Perhaps like the servants living in the castle not worrying too much about whether the spiders are done rebuilding their webs. It is much like the situation after Weierstrass introduced $\epsilon$-$\delta$ definitions and limits into calculus: he re-established what everyone had been using anyway, and it had little impact on the applications of calculus.

That rambles a bit, perhaps. And I'm not a very learned student of history, so my impressions may be off anyway.

Solution 3:

The difficulty can be hard to see from a modern point of view, where set theory books are written in a way that avoids the problem. So imagine that you had never learned any set theory, but were familiar with the English word "set". You would probably think, as mathematicians did at the time, that any well-defined collection of objects forms a set, and any set is a particular well-defined collection of objects. This seems like a perfectly reasonable idea, but Russell's paradox shows that it's actually inconsistent. Moreover, the "set" that we now associate with Russell's paradox -- the set of all sets that don't contain themselves -- seems perfectly well defined. After all, each particular set is either a member of itself or not, so the ones that are not seem to form a well-defined collection. This casts doubt on the English-language term "set", which in turn seems to cast doubt on all mathematics done in English. That is why the paradox was so revolutionary: things people had been routinely using turned out to be inconsistent, and it seemed possible that more paradoxes might be found that would cast doubt on other mathematical objects.

The solution that mathematicians eventually adopted was to drop the normal English meaning of set ("any well-defined collection of things") and replace it with a different concept, the "iterative" concept of sets. But it took some time for that solution to even be proposed, much less accepted. Even now, a key argument that the new conception is consistent is that nobody has managed to find an inconsistency. If that seems less than completely certain, that's because it is.

Several other paradoxes were discovered around the same time. One of the more interesting ones, in my opinion, which was discovered somewhat later, is "Curry's paradox". It consists of this sentence: "If this sentence is true then 0=1". If you use the normal natural-language proof technique for proving an if/then statement, you can actually prove that the sentence is true (it's not hard; try it, or see the sketch below). But if that sentence is true, then 0=1, and you can prove that too, in the normal mathematical way. The fact that the normal techniques we use in everyday mathematics can prove 0=1 is certainly paradoxical, since each of those techniques seems fine on its own. Like Russell's paradox, Curry's paradox both casts doubt on our informal natural-language mathematics and shows that certain formal theories that we might wish were consistent are actually inconsistent.
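Here is a sketch of the first half, in case you would rather not work it out yourself. Write $C$ for the sentence "if $C$ is true then $0=1$". To prove a conditional, assume the hypothesis: so assume $C$ is true. Under that assumption we have both $C$ itself and the statement "$C$ is true", so modus ponens gives $0=1$. Discharging the assumption, we have proved "if $C$ is true then $0=1$", which is exactly $C$. So $C$ is proved, hence true, and one final application of modus ponens yields $0=1$.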

Solution 4:

The significance of Russell's paradox is not just philosophical. Russell's paradox yields an outright contradiction in the presence of the unrestricted axiom of comprehension:

$\exists s. \forall e. e\in s \leftrightarrow \phi$

where $\phi$ is a logical formula, typically containing $e$ free (and not containing $s$). This axiom “creates” the set $s$ of all elements $e$ satisfying $\phi$. In other words, it turns properties into sets.

Let $r$ be the set of elements $e$ satisfying $e \notin e$. Then $r$ satisfies $\forall e.\ e \in r \leftrightarrow e \notin e$. Instantiating $e$ to $r$ gives $r \in r \leftrightarrow r \notin r$. Write $p := (r \in r)$ for clarity. Now use the fact that $(p \leftrightarrow \neg p) \rightarrow \bot$ is valid even in intuitionistic propositional logic: from $p \leftrightarrow \neg p$, assuming $p$ gives $\neg p$ and hence $\bot$, so $\neg p$ holds; but then $p$ holds, and $\bot$ follows. We've got a contradiction.

If you suspect there is a gap in my proof: I formalized it in Coq, though the accompanying text is mostly in Russian. To prove a contradiction, I need building blocks, which I introduced via “Variable” — the universe “all_sets”, the membership relation “in_set”, and our darling, “comprehension”.
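If you'd rather not wade through the Russian, here is a minimal self-contained sketch along the same lines (my own reconstruction, reusing the names above; it need not match the original development line for line):

```coq
Section Russell.
  (* The building blocks: a universe of sets, a membership relation,
     and unrestricted comprehension, all taken as assumptions. *)
  Variable all_sets : Type.
  Variable in_set : all_sets -> all_sets -> Prop.
  Hypothesis comprehension :
    forall phi : all_sets -> Prop,
      exists s, forall e, in_set e s <-> phi e.

  (* Russell's paradox: the assumptions are jointly contradictory. *)
  Theorem russell : False.
  Proof.
    (* Let r be the "set" of all e with ~ in_set e e. *)
    destruct (comprehension (fun e => ~ in_set e e)) as [r Hr].
    (* Instantiate the defining property at e := r. *)
    destruct (Hr r) as [H1 H2].
    (* H1 : in_set r r -> ~ in_set r r
       H2 : ~ in_set r r -> in_set r r *)
    assert (Hn : ~ in_set r r) by (intro H; exact (H1 H H)).
    exact (Hn (H2 Hn)).  (* intuitionistically valid throughout *)
  Qed.
End Russell.
```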

P.S. No contradictions have been found in pure Coq. ;)