Why do we use the word 'compact' for compact spaces?
Solution 1:
There is already an answer by @Chappers, but I wanted to remark on the word "compact" and why it can be considered suitable for its purpose. Perhaps you will find it relevant.
In my opinion, compact is a very good term – compact spaces really are the spaces that are closely and neatly packed together, though not in the common, literal sense of the phrase. Anybody who tries to apply the everyday intuitions behind these words is likely to be confused; at least I know I was (but thanks to that I was able to arrive at my current intuition $\ddot\smile$).
For example, the real line is not compact, but we can adjoin two points to get $\mathbb{R}\cup \{-\infty, \infty\}$, and suddenly it is compact. But how can adjoining more points make a larger thing small? How can a large (i.e. non-compact) space ever be embedded in a small (i.e. compact) space? Weirder still, the open interval $(0,1)$ is non-compact, despite being apparently much smaller than all the spaces we have so far considered. But once again, we can add two points to get the compact space $[0,1].$ What's going on here?
The point is that despite the difference in apparent length or size (or whatever you wish to call it), the spaces $[0,1]$ and $\mathbb{R} \cup \{-\infty,\infty\}$ are topologically equivalent (homeomorphic). Similarly, the spaces $(0,1)$ and $\mathbb{R}$ are topologically equivalent. Furthermore, each of these spaces can be embedded in any of the others. So we need a notion of closeness that takes this into account. In other words, no ordinary intuition (i.e. the common understanding of two points being close or neatly packed together) will suffice. Compactness solves this problem.
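To make the equivalence concrete (this is one standard choice of map among many), a homeomorphism between $(0,1)$ and $\mathbb{R}$ is

$$f(x) = \tan\!\left(\pi\left(x - \tfrac{1}{2}\right)\right), \qquad f^{-1}(y) = \tfrac{1}{2} + \tfrac{1}{\pi}\arctan(y),$$

and adjoining the endpoints $0, 1$ on one side corresponds exactly to adjoining $-\infty, +\infty$ on the other.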
To make my point I will use two conditions equivalent to compactness:
- A space is compact if every open cover has a finite subcover.
- A space is compact if every net has a converging subnet (nets are generalizations of sequences).
On covers:
Somebody could say: so a compact set has a finite open cover, big deal, $\mathbb{R}$ has one too! But the first condition says much more than that: *every* open cover has a finite subcover, and the fact that not a single cover is left out is important. You can think of it in the following way:
Suppose that we have a compact connected space and that you were to tell me which points, in your opinion, are close to each other and which are far apart. You do this by covering the space with open sets small enough to satisfy your sense of closeness, so that points which are far apart do not belong to the same open set. Yet, for any such cover I can pick a finite subcover, which means that the distance between any two points of the space, measured in the units you care about (i.e. the open sets), is smaller than some constant – hence they are close to each other (this is somewhat similar to how "almost all" may mean "all but a finite number", even when that finite number is big).
Consider the open interval $(0,1)$: it may seem small, but you can specify your open sets in such a way that the closer you get to $0$, the farther apart your points will be (in terms of the number of open sets needed to connect them). On the other hand, take the extended real line: you may go on with bigger and bigger numbers, but you eventually fall into the open set containing $+\infty$, and you do so in a finite number of steps.
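For a concrete instance of this (a standard example, not the only one): the family

$$\mathcal{U} = \left\{\left(\tfrac{1}{n}, 1\right) : n = 2, 3, 4, \ldots\right\}$$

is an open cover of $(0,1)$, since every $x \in (0,1)$ satisfies $\tfrac{1}{n} < x$ for some $n$. But any finite subfamily has a largest index $N$, and its union is just $\left(\tfrac{1}{N}, 1\right)$, which misses all the points in $\left(0, \tfrac{1}{N}\right]$ – so no finite subcover exists, and $(0,1)$ is not compact.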
On nets:
Nets are a generalization of sequences, so to make this more approachable, let me describe it in terms of sequences; just remember that sequential compactness and compactness are not equivalent in general (although they are for metric spaces).
Consider a compact space and a sequence of its elements. You could imagine walking around and visiting different places in that space. We know that the sequence has a convergent subsequence (in the metric case, at least), so, in other words, there has to be a place whose neighborhood you visit infinitely many times. That means that if you walk long enough, you will have to come back to some neighborhood you have already been to. Such a space has to be rather small, right?
On the other hand, if you consider a sequence that does not have any convergent subsequence, e.g. $1, 2, 3, 4, \ldots$ in $\mathbb{R}$, then you can go on and on forever – and indeed the space is not compact; it is not "closely and neatly packed together". Similarly with $(0,1)$: you can pick the neighborhoods (a cover) and a specific pattern of walking (dependent on the neighborhoods) so that you never visit any neighborhood twice. Such a space is not small in this sense, and indeed it is not compact.
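To see the failure inside $(0,1)$ explicitly (again a standard example): take the sequence

$$x_n = \tfrac{1}{n}, \qquad n = 2, 3, 4, \ldots$$

In $\mathbb{R}$ every subsequence converges to $0$, but $0 \notin (0,1)$, so no subsequence converges to a point of the space. In $[0,1]$ the very same walk does come back: the sequence converges to the added point $0$, matching the picture above.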
Putting this in terms of nets instead of sequences (which means that we take steps along an arbitrary directed set, not just the natural numbers – for example, you could make infinitely many infinitely small steps) might be confusing, but I think it still gives some intuition for why you can think of compact spaces as "closely and neatly packed together".
I hope this helps $\ddot\smile$
Solution 2:
Fréchet was the one to coin the term, but according to this page from Earliest Uses of Some Words in Mathematics, even he didn't remember why he used it (and also according to that entry, some mathematicians didn't like the term at the time).
But we're stuck with it now!
(I'd like to find another source on this, but I doubt there's much about: even History of Topology doesn't give reasons or any idea of the reception it received. (As with most histories of mathematics, it seems to have precious little history in it.) )