Countable Product of Sequentially Compact Spaces is Sequentially Compact

As you've undoubtedly noticed, you can't just argue as in the case of finite products, thinning out the sequence again and again to get convergence in more and more components. After any finite number of steps, you still have an infinite subsequence of your original sequence, but if you do infinitely many steps, then every term of your original sequence might eventually get removed. Then, instead of having a subsequence at the end of the process, you've got nothing.

The idea of the diagonal argument is to modify the process slightly so that your sequence doesn't entirely disappear. Very roughly, you constrain your thinning-out operations to ensure that an infinite subsequence remains at the end of the process. Here are the details:

Start with your original sequence and, before doing any thinning, promise yourself that you will never delete its first term; call that term $a_1$. Now thin out the sequence so that the first components converge, but, in accordance with your promise, keep $a_1$ in your new, thinned-out sequence. This does no harm to the convergence of the first components: keeping $a_1$ means that the sequence of first components has one unavoidable term at the beginning, namely the first component of $a_1$, but one term at the beginning doesn't affect convergence.
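
In symbols, if you like, you can record this step as follows, writing $X_1, X_2, \dots$ for the factor spaces, $\pi_1$ for the projection onto $X_1$, and $\mathbf{s}^{(1)} = s^{(1)}_1, s^{(1)}_2, \dots$ for the first thinned-out sequence (this notation is only bookkeeping; nothing in the argument depends on it):

$$ \mathbf{s}^{(1)} = a_1,\ s^{(1)}_2,\ s^{(1)}_3,\ \ldots \qquad \text{with} \qquad \pi_1\bigl(s^{(1)}_k\bigr) \longrightarrow p_1 \ \text{ for some } p_1 \in X_1. $$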

So now you have your first thinned-out sequence, starting with $a_1$ and with its first components converging. Now make a second promise, namely that the second term of this thinned-out sequence, which I'll call $a_2$, will never be deleted. Then thin out the sequence again, just as in your finite-product proof, to make the sequence of second components converge, but, while thinning it out, keep your two promises. That is, $a_1$ and $a_2$ remain in this second thinned-out sequence. Again, you can do this because two terms at the beginning have no effect on convergence.

Continue in this way, alternating promises with thinnings. After $n$ steps, you have a subsequence of your original sequence with two crucial properties. (1) Its first, second, $\dots$, $n$-th components are convergent sequences, and (2) its first, second, $\dots$, $n$-th terms, which I'm calling $a_1,a_2,\dots,a_n$, will be the same in all future thinned-out sequences.
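
In the same bookkeeping notation, with $\mathbf{s}^{(n)} = s^{(n)}_1, s^{(n)}_2, \dots$ denoting the sequence you have after $n$ steps and $p_m \in X_m$ the limit found at the $m$-th thinning, the two properties read:

$$ s^{(n)}_k = a_k \ \text{ for } k \le n, \qquad \text{and} \qquad \pi_m\bigl(s^{(n)}_k\bigr) \xrightarrow{\ k\to\infty\ } p_m \ \text{ for each } m \le n. $$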

Now look at the infinite sequence $a_1,a_2,\dots$ consisting of the subjects of all your promises. For each $n$, the sequence of its $n$-th components converges, because $a_1,a_2,\dots$ is a subsequence of the sequence you had after $n$ thinnings, and you ensured convergence of the $n$-th components at that stage.
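
Why is it a subsequence? Any two consecutive promised terms $a_k$ and $a_{k+1}$ are the $k$-th and $(k+1)$-st terms of every stage from the $(k+1)$-st on, and each of those stages is a subsequence of all earlier ones; so all the promised terms occur in $\mathbf{s}^{(n)}$, in their given order, for every $n$. In the notation above,

$$ a_1, a_2, a_3, \dots \ \text{ is a subsequence of } \ \mathbf{s}^{(n)} \ \text{ for every } n, \qquad \text{hence} \qquad \pi_n(a_k) \xrightarrow{\ k\to\infty\ } p_n. $$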

This means that $a_1,a_2,\dots$ converges in the product topology (convergence in the product topology is the same as convergence in every component, since a basic open set constrains only finitely many components). Since it's clearly a subsequence of the sequence you began with, the proof is complete.


You mentioned a "diagonal argument". Since the other answer doesn't use one explicitly (at least it doesn't explain the name), I'll present it here.

We consider a sequence of sequences, each a subsequence of the previous one. Define

$$ \mathbf{x_0} = x_1, x_2, \ldots $$

as the original sequence. Let

$$ \mathbf{x_1} = x_{11}, x_{12}, \ldots $$

be a subsequence of $\mathbf{x_0}$ such that the first coordinate converges; that is, the sequence of projections onto $X_1$ converges. This is possible by sequential compactness of $X_1$.

Similarly, for all $n\ge 2$, let $\mathbf{x_n}$ be a subsequence of $\mathbf{x_{n-1}}$ such that the $n$-th coordinate converges.
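
In symbols, writing $\pi_n$ for the projection onto the factor $X_n$ and $p_n$ for the limit obtained at stage $n$ (these names are introduced here just to have something to point at later), the defining property of $\mathbf{x_n}$ is

$$ \pi_n(x_{n1}),\ \pi_n(x_{n2}),\ \pi_n(x_{n3}),\ \ldots \ \longrightarrow \ p_n \in X_n. $$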

Now consider the sequence

$$ \mathbf{y} = x_{11}, x_{22}, x_{33}, \ldots $$

formed by taking the "diagonal" of the array

$$ \begin{array}{llll} x_{11}& x_{12}& x_{13}& \ldots\\ x_{21}& x_{22}& x_{23}& \ldots\\ x_{31}& x_{32}& x_{33}& \ldots\\ \vdots& \vdots& \vdots& \ddots \end{array} $$

We claim this sequence converges in the product space. This is the "diagonal" of the diagonal argument.

Note that for $m < n$, $x_{nn}$ is a term of $\mathbf{x_m}$ (since $\mathbf{x_n}$ is a subsequence of $\mathbf{x_m}$), and it comes strictly after $x_{mm}$ there: the $n$-th term of a subsequence occupies position at least $n$ in the parent sequence, while $x_{mm}$ occupies position $m < n$. Since a subsequence keeps terms in their original relative order, the diagonal terms appear in each $\mathbf{x_m}$ in their given order. Thus,

$$ x_{11}, x_{22}, \ldots $$

is a subsequence of $\mathbf{x_1}$, so $\mathbf{y}$ converges in the first coordinate. Similarly,

$$ x_{22}, x_{33}, \ldots $$

is a subsequence of $\mathbf{x_2}$, so $\mathbf{y}$, which differs from it only in its first term, converges in the second coordinate as well. Continuing like this, and noting that dropping finitely many initial terms never affects convergence, we see that $\mathbf{y}$ converges pointwise in all coordinates.
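
Pointwise convergence in every coordinate is exactly convergence in the product topology. To spell it out, let $p$ be the point whose $n$-th coordinate is $p_n$, the limit of the $n$-th coordinates. A basic open neighborhood of $p$ in the product topology restricts only finitely many coordinates, say

$$ U \;=\; \pi_{n_1}^{-1}(U_{n_1}) \cap \dots \cap \pi_{n_r}^{-1}(U_{n_r}), \qquad p_{n_i} \in U_{n_i} \subseteq X_{n_i} \ \text{open}, $$

and each of the finitely many conditions $\pi_{n_i}(x_{kk}) \in U_{n_i}$ holds for all sufficiently large $k$, so $\mathbf{y}$ is eventually in $U$. Hence $\mathbf{y}$ converges to $p$ in the product topology, and since $\mathbf{y}$ is a subsequence of the original sequence $\mathbf{x_0}$, the product is sequentially compact.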

The argument where you arrange a nested sequence of sequences into an array and take the diagonal is used elsewhere, for example in the proof of the Kolmogorov Extension Theorem.