Two definitions of $\limsup$

The first definition has the great virtue of exactly matching the notation: it defines $\limsup_na_n$ to be the limit (as $n\to\infty$) of the suprema (of the tails of the sequence). Since the behavior of a sequence is determined by its tails, this is a very natural thing to consider. Loosely speaking, it’s what the supremum of the sequence ‘ought’ to be if we could ignore the more or less meaningless fluctuations in every finite initial segment. We can’t quite do that literally, because there can be such fluctuations arbitrarily far out in the sequence, but we can do it in the limit. Let $u=\limsup_na_n$; any given tail of the sequence may have supremum larger than $u$, but if it does, a later tail will have a smaller supremum, having squeezed out more of the meaningless ‘early’ fluctuation. Note that because the suprema of the tails form a non-increasing sequence,

$$\limsup_na_n=\lim_{n\to\infty}\sup_{k\ge n}a_k=\inf_{n\in\Bbb N}\sup_{k\ge n}a_k\;.\tag{1}$$
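The equality in $(1)$ can be watched numerically. The following sketch (my own illustration, not part of the discussion above) truncates a sequence at finitely many terms, so the tail suprema are only approximations of the true $\sup_{k\ge n}a_k$; the example sequence $a_n=(-1)^n(1+1/n)$, with $\limsup_na_n=1$, is an assumption chosen for concreteness:

```python
# Finite-truncation sketch of (1): the tail suprema sup_{k >= n} a_k
# form a non-increasing sequence, so their limit equals their infimum.
# Example sequence: a_n = (-1)^n * (1 + 1/n), whose limsup is 1.
# (Truncating at N terms only approximates the true tails.)

N = 10_000
a = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]

def tail_sup(n):
    """sup of a_k for k >= n (within the truncation)."""
    return max(a[n - 1:])

sups = [tail_sup(n) for n in range(1, 50)]

# The tail suprema are non-increasing ...
assert all(s >= t for s, t in zip(sups, sups[1:]))
# ... so their infimum is just the last one computed, creeping down toward 1:
print(min(sups))  # → 1.02 (i.e. 1 + 1/50)
```

Each tail supremum may exceed $1$, but as the early fluctuations are squeezed out the values decrease toward the limit superior.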

This definition also generalizes relatively easily to sequences in arbitrary complete lattices, which have notions of supremum and infimum of arbitrary sets of elements. In particular, if $X$ is a set, $\wp(X)$ is a complete lattice with $\bigcup$ as supremum and $\bigcap$ as infimum. Let $\langle A_n:n\in\Bbb N\rangle$ be a sequence of subsets of $X$. A first attempt to generalize the first definition might be

$$\limsup_nA_n=\lim_{n\to\infty}\bigcup_{k\ge n}A_k\;,\tag{2}$$

but we don’t (yet) have a notion of the limit of a sequence of sets. The last expression in $(1)$, however, does the trick nicely: we can meaningfully define

$$\limsup_nA_n=\inf_{n\in\Bbb N}\bigcup_{k\ge n}A_k=\bigcap_{n\in\Bbb N}\bigcup_{k\ge n}A_k\;.$$

Better yet, we can see that it has the same general effect of getting rid of essentially meaningless initial fluctuations: the union of any given tail may be bigger than $\limsup_nA_n$, but if it is, a later tail will have a smaller union, having squeezed out more of the points that are in only finitely many of the $A_n$.
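The same squeezing-out can be watched for sets. In this sketch (my own finite-truncation illustration; the particular sets $A_n$ are an assumption), the point $42$ lies in only finitely many of the $A_n$, so intersecting the tail unions eliminates it, while the points lying in infinitely many of the $A_n$ survive:

```python
# Finite-truncation sketch of limsup A_n = intersection over n of the
# tail unions U_{k >= n} A_k.  Here A_n = {0, n mod 3}, plus the point
# 42 thrown into only the first five sets; so 0, 1, 2 each lie in
# infinitely many A_n, while 42 lies in only finitely many.
# (Truncating at N sets only approximates the true tails.)

N = 100
A = [{0, n % 3} | ({42} if n <= 5 else set()) for n in range(1, N + 1)]

def tail_union(n):
    """Union of A_k for k >= n (within the truncation)."""
    return set().union(*A[n - 1:])

# Intersecting the tail unions squeezes out the 'early' point 42:
limsup = set.intersection(*(tail_union(n) for n in range(1, 50)))
print(limsup)  # → {0, 1, 2}
```

The first tail union contains $42$, but every tail union from $n=6$ on does not, exactly as described above: a later tail has a smaller union.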

The second definition expresses a very important property of the limit superior of a sequence of real numbers, but I think that the first gives easier access to the various more general notions of limit superior.


I think a more appropriate definition of $\limsup$ is

$$ \limsup_{n\to\infty} a_n = \inf_{n \ge 1} \sup\{a_n, a_{n+1}, \ldots\} $$

because you need $\limsup$ and $\liminf$ before you can define $\lim$: the limit exists precisely when $\limsup_{n\to\infty} a_n = \liminf_{n\to\infty} a_n$, and then $\lim_{n\to\infty} a_n$ is their common value.

The way I think of $\limsup$ is as the limit of the "upper envelope" of the sequence. For example, when $a_n = \left(1 + \frac 1n\right)\sin n$, we have $\limsup_{n \to \infty} a_n = 1$ and $\liminf_{n \to \infty} a_n = -1$, but the limit doesn't exist.
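This "envelope" picture can be sketched numerically (a finite truncation of my own, only approximating the true tails; the slow equidistribution of $\sin n$ means the printed values approach $\pm 1$ but don't hit them):

```python
# Numerical look at the "upper envelope" of a_n = (1 + 1/n) * sin(n):
# the tail suprema drift down toward limsup = 1 and the tail infima
# drift up toward liminf = -1, while a_n itself never settles.
# (A finite truncation only approximates the true tails.)
import math

N = 100_000
a = [(1 + 1 / n) * math.sin(n) for n in range(1, N + 1)]

for n in (1, 100, 10_000):
    tail = a[n - 1:]
    print(n, max(tail), min(tail))
# The printed tail suprema decrease toward 1; the tail infima
# increase toward -1.
```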


I vaguely recall that Rudin does say explicitly that the $\limsup$ of a sequence is the supremum of its subsequential limits. Could be wrong.

As for the first definition, you should think of the $u_n$ as being "the biggest element this side of the river". If you keep moving the river, you're left with only the biggest element. Of course this is very imprecise, but it helps me.


$\limsup_{n\to\infty} x_n$ and $\liminf_{n\to\infty} x_n$ naturally arise when trying to understand convergent subsequences of the sequence $(x_n)$. Here's the idea (we'll be looking at real sequences throughout):

Convergent sequences have nice properties (e.g. boundedness, $\lim_{n\to\infty} (xa_n + yb_n) = x(\lim_{n\to\infty} a_n) + y(\lim_{n\to\infty} b_n)$, etc.), and the notion is quite central, since many other notions can be restated in terms of it (e.g. "$f : A\,(\subseteq \mathbb{R}) \to \mathbb{R}$ is continuous at a point $p \in A$" if and only if "for every sequence $(a_n)$ in $A$ converging to $p$, $f(a_n)$ converges to $f(p)$"). In general a sequence doesn't converge, but the next best thing we can hope for is a convergent subsequence (recall that a subsequence of $x_1, x_2, x_3, \ldots$ is just a sequence $x_{j_1}, x_{j_2}, x_{j_3}, \ldots$ with $j_1 < j_2 < j_3 < \cdots$). When does a sequence $(x_n)$ have a convergent subsequence? And when it does, what can we say about $\{\text{limits of convergent subsequences of } (x_n)\}$?

Let's first focus on the second question, i.e. on what happens if a sequence $(x_n)$ does have a convergent subsequence $(x_{n_k})$. Assuming boundedness of $(x_n)$, we have $\beta_{n_k} \leq x_{n_k}, x_{n_{k+1}}, x_{n_{k+2}}, \ldots \leq \alpha_{n_k}$, where $\alpha_j := \sup\{x_j, x_{j+1}, x_{j+2}, \ldots\}$ and $\beta_j := \inf\{x_j, x_{j+1}, x_{j+2}, \ldots\}$. Since $(\alpha_j)$ is non-increasing and bounded below, it has a limit $\alpha = \inf \alpha_j$; similarly $(\beta_j)$, being non-decreasing and bounded above, has a limit $\beta = \sup \beta_j$. Since $n_k \to \infty$, we have $\alpha_{n_k} \to \alpha$ and $\beta_{n_k} \to \beta$, so taking $k \to \infty$ gives $\beta \leq \lim_{k\to\infty} x_{n_k} \leq \alpha$.

To summarise: let $(x_n)$ be a bounded sequence with a convergent subsequence $(x_{n_k})$. Then $\lim x_{n_k} \in [\beta, \alpha]$, where $\alpha$ and $\beta$ are the limits of $\alpha_j := \sup\{x_j, x_{j+1}, \ldots\}$ and $\beta_j := \inf\{x_j, x_{j+1}, \ldots\}$ respectively.

Now let's tackle the first question (notice that in the above discussion we only needed boundedness of $(x_n)$ to talk about $\alpha_j$, $\beta_j$ and their limits $\alpha$, $\beta$). If $(x_n)$ is a bounded sequence, then intuitively "there are terms of the sequence sticking close to the $\alpha_j$s, and the $\alpha_j$s converge to $\alpha$" (and similarly for $\beta$), so we expect there to be subsequences converging to $\alpha$ and $\beta$. This is actually true: let $(x_n)$ be a bounded sequence, and as usual let $\alpha_j := \sup\{x_j, x_{j+1}, \ldots\}$ and $\beta_j := \inf\{x_j, x_{j+1}, \ldots\}$, with respective limits $\alpha = \inf \alpha_j$ and $\beta = \sup \beta_j$. Since $\alpha_1$ is the supremum of the whole sequence, there exists an $n_1$ such that $\alpha_1 - 1 < x_{n_1} \leq \alpha_1$. Since $\alpha_{n_1+1}$ is the supremum of the tail starting at $n_1+1$, there exists an $n_2 \geq n_1 + 1 > n_1$ such that $\alpha_{n_1+1} - \frac{1}{2} < x_{n_2} \leq \alpha_{n_1+1}$; then there exists an $n_3 \geq n_2 + 1 > n_2$ such that $\alpha_{n_2+1} - \frac{1}{3} < x_{n_3} \leq \alpha_{n_2+1}$, and so on. In this way we get $n_1 < n_2 < \cdots$ such that $\alpha_{n_{j-1}+1} - \frac{1}{j} < x_{n_j} \leq \alpha_{n_{j-1}+1}$ for all $j \geq 1$ (we take $n_0 = 0$ to make sense of the $j = 1$ inequality). Since $\alpha_{n_{j-1}+1} \to \alpha$, taking $j \to \infty$ gives $\lim_{j \to \infty} x_{n_j} = \alpha$. Similarly one can construct a subsequence of $(x_n)$ converging to $\beta$.
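The greedy construction above can be sketched on a concrete sequence (my own illustration, with the example $x_n = (-1)^n(1+1/n)$, whose $\alpha$ is $1$, as an assumption; the tail suprema are computed from a finite truncation):

```python
# Greedy construction of a subsequence converging to alpha (= limsup),
# for the concrete bounded sequence x_n = (-1)^n * (1 + 1/n).
# At step j we pick the smallest n_j > n_{j-1} with
#   alpha_{n_{j-1}+1} - 1/j < x_{n_j} <= alpha_{n_{j-1}+1},
# using a truncation of the sequence to compute the tail suprema.

N = 1_000
x = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]

def alpha(m):
    """alpha_m = sup { x_m, x_{m+1}, ... } (within the truncation)."""
    return max(x[m - 1:])

indices = []
prev = 0  # n_0 = 0
for j in range(1, 20):
    target = alpha(prev + 1)
    # smallest n > prev with x_n within 1/j of the tail supremum
    n = next(n for n in range(prev + 1, N + 1)
             if target - 1 / j < x[n - 1] <= target)
    indices.append(n)
    prev = n

subseq = [x[n - 1] for n in indices]
print(indices[:5], subseq[-1])  # the chosen x_{n_j} approach alpha = 1
```

For this sequence the construction simply picks out the even-indexed terms, which indeed converge to $\alpha = 1$.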

To summarise the entire discussion: let $(x_n)$ be a bounded sequence. Then $S := \{\text{limits of convergent subsequences of } (x_n)\}$ is non-empty, with $\max(S) = \alpha$ and $\min(S) = \beta$, where $\alpha = \inf \alpha_j$ and $\beta = \sup \beta_j$ are the limits of $\alpha_j := \sup\{x_j, x_{j+1}, \ldots\}$ and $\beta_j := \inf\{x_j, x_{j+1}, \ldots\}$ respectively.

Remark: here the fact that $S \neq \varnothing$, i.e. that every bounded sequence of reals has a convergent subsequence, is traditionally called the Bolzano–Weierstrass theorem. It is central to analysis, and can also be proved by a bisection argument.
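The bisection argument can itself be sketched numerically. The following is a rough finite-truncation illustration of my own, not a proof: with only finitely many terms, "the half containing infinitely many terms" has to be replaced by "the half containing more of the remaining terms", and the example sequence $x_n = \sin n$ is an assumption:

```python
# Finite-truncation sketch of the bisection argument behind
# Bolzano-Weierstrass: repeatedly halve an interval containing the
# sequence, keep a half that still contains "many" terms (for a truly
# infinite sequence: infinitely many), and pick one later term from
# the kept half at each stage.  The picks are then squeezed into
# nested intervals, hence (in the infinite case) converge.
import math

N = 100_000
x = [math.sin(n) for n in range(1, N + 1)]

lo, hi = -1.0, 1.0
picked = []
last = 0  # index of the most recently chosen term
for _ in range(12):
    mid = (lo + hi) / 2
    # Count remaining terms in each half (proxy for "infinitely many").
    left = sum(1 for n in range(last + 1, N + 1) if lo <= x[n - 1] <= mid)
    right = sum(1 for n in range(last + 1, N + 1) if mid < x[n - 1] <= hi)
    if left >= right:
        hi = mid
    else:
        lo = mid
    # Pick the next term of the sequence lying in the kept half.
    last = next(n for n in range(last + 1, N + 1) if lo <= x[n - 1] <= hi)
    picked.append(x[last - 1])

print(hi - lo)  # → 0.00048828125, i.e. width 2 / 2**12 trapping the last pick
```

After $k$ halvings the surviving interval has width $2/2^k$, which is what forces the chosen subsequence to be Cauchy in the genuine (infinite) argument.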