What does it REALLY mean for a metric space to be compact?

I've been trying to wrap my head around the concept of compactness and get an intuitive understanding of what it is. The definition used in my textbook is the finite subcover definition.

A subset $K$ of a metric space $X$ is said to be compact if every open cover of $K$ contains a finite subcover. More explicitly, the requirement is that if $\{G_{\alpha}\}$ is an open cover of $K$, then there are finitely many indices $\alpha_1,\dots,\alpha_n$ such that $$K\subset G_{\alpha_{1}}\cup \dots \cup G_{\alpha_{n}}.$$

This definition is not very accessible to me, so I've been looking around for something to help me understand it.

So far, I haven't really wrapped my head around the idea yet, but I have learned the following:

  1. Compactness is a kind of limited-ness.

  2. Compactness is one of two properties that together amount to finiteness, the other being discreteness. (I saw this in an explanation involving "foos", hypothetical creatures that are red and short, where the word "foo" has come to mean something both red and short.)

  3. In $\mathbb{R}^k$, compactness is equivalent to being closed and bounded.

So I guess my question is, what is it about compactness that led mathematicians to call it "compact"? What exactly is compact about it? What does this have to do with the definition (that is, where does the definition come from)? Furthermore, what does it mean to be discrete? I think it would help if you could give me an example of metric spaces that are:

  1. Compact and discrete

  2. Compact but not discrete

  3. Discrete but not compact

  4. Neither discrete nor compact

I've already read this question: What should be the intuition when working with compactness?

The answers to that question explain very well why compactness is difficult to understand, but I was hoping for something more concrete to help me along.


Solution 1:

Consider the following as subspaces of $\mathbb R$:

  1. $\{0,1\}$, or in fact any finite set, is compact and discrete.
  2. $[0,1]$ is compact but not discrete.
  3. $\mathbb Z$ and $\left\{\frac1n:n\in\mathbb N\,\right\}$ are discrete but not compact. (But $\left\{\frac1n:n\in\mathbb N\,\right\}\cup\{0\}$ is compact and not discrete)
  4. $(0,1)$ and $\mathbb Q$ are neither discrete nor compact.
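To see concretely why $(0,1)$ in item 4 fails to be compact, here is a hedged numerical sketch (an illustration, not a proof) of the standard open cover $\{(1/n,1) : n \ge 2\}$, which has no finite subcover; the names `covered` and `finite_choice` are mine:

```python
# Sketch: any finite selection from the cover {(1/n, 1) : n >= 2} of (0, 1)
# misses a point just below 1/max(selection).
def covered(x, ns):
    """True if x lies in some interval (1/n, 1) for an index n in ns."""
    return any(1 / n < x < 1 for n in ns)

finite_choice = [2, 5, 17, 100]     # an arbitrary finite selection of indices
N = max(finite_choice)
witness = 1 / (N + 1)               # lies in (0, 1) but below every 1/n used
assert 0 < witness < 1
assert not covered(witness, finite_choice)
```

Whatever finite selection you make, the same construction of a witness point works, which is exactly what "no finite subcover" means.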

Solution 2:

Your examples:

  1. The discrete metric space on a finite set is compact.
  2. Closed bounded sets in $\mathbb{R}^n$ are compact.
  3. The discrete metric space on an infinite set is not compact.
  4. Many examples in $\mathbb{R}^n$ are available here, but open balls are probably the most easily visualized.
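Example 3 above can be made very tangible: in the discrete metric, every ball of radius $1/2$ is a singleton, so the cover of an infinite set by such balls has no finite subcover. A hedged Python sketch (using a large finite set as a stand-in for an infinite one):

```python
# Sketch: in the discrete metric, radius-1/2 balls are singletons, so a
# finite subfamily of the singleton cover can only cover finitely many points.
def discrete_dist(x, y):
    return 0 if x == y else 1

def ball(center, radius, points):
    return {p for p in points if discrete_dist(center, p) < radius}

points = set(range(1000))                 # stand-in for an infinite set
cover = [ball(p, 0.5, points) for p in points]
assert all(len(b) == 1 for b in cover)    # each ball is a singleton
finite_sub = cover[:37]                   # any finite subfamily
assert len(set().union(*finite_sub)) == 37 < len(points)
```

On a genuinely infinite set the same cover exists, and no finite subfamily can reach every point.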

If your goal is to study metric spaces rather than topological spaces, then I suggest you consider the following three statements. The first two are definitions. The third is usually called a theorem (since "compact" is usually already defined to mean "sequentially compact" or "topologically compact").

  1. A metric space $X$ is complete if and only if every Cauchy sequence in $X$ converges to a point in $X$.

  2. A metric space $X$ is totally bounded if and only if for every $\epsilon > 0$ there exist balls $B_1,\dots,B_n$ centered at $x_1,\dots,x_n \in X$ and with radius at most $\epsilon$, such that $B_1,\dots,B_n$ cover $X$. We call such a collection of balls an $\epsilon$-net for $X$.

  3. A metric space $X$ is compact if and only if it is complete and totally bounded.
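To make the $\epsilon$-net definition concrete, here is a hedged sketch for $X=[0,1]$: for any $\epsilon$, finitely many balls of radius $\epsilon$ centred at $0, \epsilon, 2\epsilon, \dots$ cover $X$ (the function name is mine):

```python
import math

# Sketch: a finite eps-net for X = [0, 1], with centres on a grid of step eps.
def epsilon_net(eps):
    n = math.ceil(1 / eps)
    return [k * eps for k in range(n + 1)]   # finitely many centres

eps = 0.1
centres = epsilon_net(eps)
grid = [k / 1000 for k in range(1001)]       # a fine sample of [0, 1]
# every sampled point is within eps of some centre
assert all(min(abs(x - c) for c in centres) <= eps for x in grid)
# by contrast, the unbounded space R admits no finite eps-net,
# so R is not totally bounded.
```

The key point is that the number of centres is finite for each $\epsilon$, even though it grows as $\epsilon$ shrinks.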

This formulation is easier to intuit, in my opinion. Completeness says you can't "escape" $X$ along a sequence which is otherwise "trying" to converge. For example, $[0,2] \cap \mathbb{Q}$ is not compact because we can "escape" it along a sequence converging to $\sqrt{2}$.
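The "escape" along a sequence converging to $\sqrt 2$ can be exhibited explicitly. A hedged sketch using exact rational arithmetic and the Babylonian iteration $x_{k+1} = \frac12(x_k + 2/x_k)$ (my choice of sequence; any rational approximations to $\sqrt 2$ would do):

```python
from fractions import Fraction

# Sketch: a Cauchy sequence of rationals in [0, 2] whose limit, sqrt(2),
# is not in [0, 2] ∩ Q -- the sequence "escapes" the space.
x = Fraction(2)
seq = [x]
for _ in range(6):
    x = (x + 2 / x) / 2        # Babylonian step, stays rational
    seq.append(x)

# every term is a rational point of [0, 2] ...
assert all(0 <= t <= 2 for t in seq)
# ... consecutive gaps shrink rapidly (Cauchy-like behaviour) ...
gaps = [abs(float(a - b)) for a, b in zip(seq, seq[1:])]
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))
# ... yet the squares approach 2, so the limit is the irrational sqrt(2)
assert abs(float(seq[-1] ** 2) - 2) < 1e-10
```

Every term lives in $[0,2]\cap\mathbb Q$, but the point the sequence is "trying" to reach does not, which is exactly the failure of completeness.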

You can think of total boundedness as kind of a combination of boundedness and "manageability". Here are three ways to view this "manageability", in increasing order of rigor:

  1. There are only finitely many "independent directions" in which one can go in the set. (In the context of normed linear spaces, the Riesz lemma makes this statement precise.)
  2. Although an "interesting" compact metric space is not finite, if we reduce our resolution so we can only distinguish points which are more than $\epsilon$ apart, then it becomes finite (because the balls, to our low-resolution eyes, are now points).
  3. By the pigeonhole principle, if you have a sequence in $X$ and an $\epsilon$-net, then infinitely many terms of the sequence must lie in one of the balls of the $\epsilon$-net. Thus by taking $\epsilon=1,1/2,1/3,\dots$ and diagonalizing, we find that every sequence has a Cauchy subsequence. If we also have completeness, this subsequence must in fact converge.

Solution 3:

Here is one way I like to think of it:

Suppose you're trying to cover an infinite compact set, and you really want to give it an infinite open cover that has no finite subcover. So you pick an infinite collection of open sets that looks like it nearly covers everything; maybe you've left out only a countable subset of an uncountable set, or something like that. You must only have a little bit left to do, right? Well, compactness says that 'little bit' does almost all the work, in the sense that once you add the little bit to your pseudo-cover, you can throw away almost all the work you had done before!

Let me explain with an example. Consider $(0,1)$. The collection $\{(\frac{1}{n+1},\frac{1}{n}) : n \in \mathbb{N}\}$ almost covers $(0,1)$, but you're missing the endpoints of the intervals, namely the points $\frac1n$ for $n \ge 2$. Can we fix this without allowing a finite subcover? Sure! Add tiny, mutually disjoint open intervals enclosing those endpoints; then no finite subcover is possible, since every set in the cover contains a point that no other set contains. (It's messy to write down, but intervals running between the midpoints of the intervals you've already used will do.)
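The patching trick can be checked in exact rational arithmetic. A hedged sketch, where `base(n)` is the interval $(\frac{1}{n+1},\frac1n)$ from the original collection and `patch(n)` is my name for the midpoint-to-midpoint interval around the missed point $\frac1n$:

```python
from fractions import Fraction as F

# Sketch: the base intervals (1/(n+1), 1/n) miss the points 1/n; each patch
# interval runs between midpoints of the two neighbouring base intervals and
# so encloses exactly one missed point, making every patch indispensable.
def base(n):
    return (F(1, n + 1), F(1, n))

def midpoint(a, b):
    return (a + b) / 2

def patch(n):                        # defined for n >= 2
    a, b = base(n)                   # base interval just below 1/n
    c, d = base(n - 1)               # base interval just above 1/n
    return (midpoint(a, b), midpoint(c, d))

for n in range(2, 50):
    x = F(1, n)
    # 1/n is missed by every base interval ...
    assert not any(base(m)[0] < x < base(m)[1] for m in range(1, 60))
    # ... caught by its own patch ...
    lo, hi = patch(n)
    assert lo < x < hi
    # ... and by no other patch, so no patch can ever be discarded.
    assert not any(patch(m)[0] < x < patch(m)[1]
                   for m in range(2, 50) if m != n)
```

Since each patch contains a point no other set contains, any subcover must keep all infinitely many patches: no finite subcover exists.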

Now consider $[0,1]$. Surely it would be absurd to suggest we can't do a similar trick?! Well, we can't, because it's compact. There's no way that even the cleverest person in the world could give you a really weird infinite covering of open sets that would not allow a finite subcover.

Worse still, we can add $0$ or $1$ alone, giving $[0,1)$ or $(0,1]$, and still pull off the same trick. You need to add both endpoints before every infinite open cover is forced to admit a finite subcover.