Why does linear dependence require a *finite* linear combination to vanish?

By definition, $S$ is linearly independent if for every $n>0$, scalars $c_1,\ldots,c_n\in F$, and distinct vectors $s_1,\ldots,s_n\in S$, we have $c_1s_1 + \ldots + c_n s_n \neq 0$ whenever the $c_i$ are not all zero.

Why do we restrict our attention to finite linear combinations? We could imagine a different definition:

Set $S$ is "infinitely linearly independent" if for any indexing set $I$, scalars $c_i\in F$, and distinct vectors $s_i\in S$, we have that $\sum_{i \in I} c_is_i \neq 0$ whenever the $c_i$ are not all zero.

Why isn't this definition, which allows arbitrary subsets of $S$, useful?


Solution 1:

In an infinite-dimensional space, one does sometimes do something very like this. But it's not quite as simple as you've put it so far. The basic problem:

How is $\sum_I c_is_i$ defined for $I$ infinite?

Well...to start out with the simplest case, how about $I=\mathbb{N}$? Then this is just summing a series, so we say $$\sum_{\mathbb{N}} c_is_i=\lim_{n\to\infty}\sum_{i=1}^n c_is_i.$$ Now we see the subtleties begin to pop out; most pressingly, what do we mean by that limit? Say our vector space is $\mathbb{R}$. Then the limit is just the $\varepsilon$-$\delta$ concept from advanced calculus, and the same goes for $\mathbb{R}^n$. But recalling that over $\mathbb{R}$ many series have no sum, or have a sum that depends on the order of their terms, we realize there's another condition needed in the proposed definition of infinite linear independence: we'd better change it to "for any $I$ such that $\sum c_is_i$ converges..."
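To see concretely why the order of the terms matters, here is a small numerical sketch (my own illustration, not part of the original answer): the alternating harmonic series converges to $\ln 2$ in its natural order, but the Riemann-style rearrangement "two positive terms, then one negative" drives the same terms to $\tfrac{3}{2}\ln 2$ instead.

```python
import math

# Illustrative demo: over R, the value of an infinite sum can depend on
# the order of its terms, so "sum over I" is not well-defined without
# fixing an ordering and a notion of convergence.

N = 100000

# Alternating harmonic series 1 - 1/2 + 1/3 - ... in its natural order:
# partial sums approach ln 2.
natural = sum((-1) ** (i + 1) / i for i in range(1, N + 1))

# Rearrangement: two positive terms, then one negative term.
# The same terms now sum to (3/2) ln 2 instead.
pos = [1.0 / i for i in range(1, N + 1, 2)]    # 1, 1/3, 1/5, ...
neg = [-1.0 / i for i in range(2, N + 1, 2)]   # -1/2, -1/4, ...
terms = []
p, q = 0, 0
while p + 1 < len(pos) and q < len(neg):
    terms += [pos[p], pos[p + 1], neg[q]]
    p += 2
    q += 1
rearranged = sum(terms)

print(round(natural, 4), round(math.log(2), 4))           # both ~ 0.6931
print(round(rearranged, 4), round(1.5 * math.log(2), 4))  # both ~ 1.0397
```

The rearranged series uses every term of the original exactly once; only the order changed, yet the limit moved.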

What's also come up is that so far I could only make sense of the infinite sum because the scalars were in $\mathbb{R}$ (the same would work over $\mathbb{Q}$ or $\mathbb{C}$). In a vector space over a general field $F$ there's no natural definition of an infinite sum: we need to decide on a notion of convergence and limits, which is essentially what it means to say we need a topology on $F$. Given such a topology, the spaces $F^n$ inherit a notion of convergence in the same way $\mathbb{R}^n$ does from that in $\mathbb{R}$.

Now this seems to be getting somewhere. Unfortunately, as another answer mentioned, this notion isn't of any use in a finite-dimensional vector space, since there are no infinite linearly independent sets there; questions of infinite linear independence only really hit their stride when one begins to study infinite-dimensional vector spaces. The problem is that these spaces don't naturally inherit a topology from the one on $\mathbb{R}$; instead, there are generally countless non-equivalent yet reasonable choices for what "convergence" of an infinite sum should mean.

OK, fine, let's not get too far into that. What we've seen is that to even talk about infinite sums, we need an infinite-dimensional vector space $V$ over a field $F$ such that both $F$ and $V$ carry topologies. Let me now come from the other side, with a very special case in which something like infinite linear independence is central to the study of certain vector spaces. The simplest infinite-dimensional example is the space $\ell^1$ of sequences $(x_i)_{i\in\mathbb{N}}$ of real numbers $x_i$ such that $\sum |x_i|<\infty$.
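As a quick numerical sketch (an illustration of mine, with hypothetical helper names): the geometric sequence $x_i = 2^{-i}$ satisfies the $\ell^1$ condition, its partial sums of absolute values staying bounded, while the harmonic sequence $y_i = 1/i$ does not, its partial sums growing like $\log n$.

```python
# Partial sums of |x_i| hint at l^1 membership numerically: bounded
# partial sums suggest the sequence is in l^1; unbounded growth means
# it is not. (Finitely many terms can only suggest, not prove.)

def partial_l1(seq, n):
    """Sum of |seq(i)| for i = 1..n."""
    return sum(abs(seq(i)) for i in range(1, n + 1))

geometric = lambda i: 2.0 ** -i  # in l^1: partial sums -> 1
harmonic = lambda i: 1.0 / i     # not in l^1: partial sums ~ log n

for n in (10, 1000, 100000):
    print(n, round(partial_l1(geometric, n), 6), round(partial_l1(harmonic, n), 3))
```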

We can make sense of limits in $\ell^1$ almost exactly as we do in $\mathbb{R}$: avoiding setting up too much notation, a sequence of vectors is Cauchy if eventually its differences have arbitrarily small $\sum|x_i|$. On a space like $\ell^1$, one gets a theory of bases formally very similar to that on $\mathbb{R}^n$ by requiring exactly that every vector be uniquely expressible as an infinite linear combination of basis elements. The basis here even looks just like the one in $\mathbb{R}^n$, namely the sequences $e_i$ that are $1$ in their $i$th place and $0$ elsewhere. It's then intuitively unsurprising that the $e_i$ are infinitely linearly independent and form a basis in the infinite sense. But they're nowhere close to a basis in the finite sense: a finite sum of them can never have infinitely many nonzero terms! In fact, the most important infinite-dimensional vector spaces never have a countable basis in the finite-sums sense. So this notion of infinite linear independence is absolutely central to the study of infinite-dimensional vector spaces, which is a large part of the field called functional analysis.
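To make the role of the $e_i$ concrete, here is a small sketch (illustrative code of mine, truncating sequences to finitely many coordinates for computation): the $\ell^1$ distance between a vector $x$ and its $n$-term partial expansion $\sum_{i<n} x_i e_i$ is exactly the tail norm $\sum_{i\ge n}|x_i|$, which tends to $0$, so $x$ really is the limit of finite combinations of the $e_i$.

```python
# Sketch: in l^1, x = lim_n sum_{i<n} x_i e_i, because the l^1 distance
# from x to its n-term partial expansion is the tail norm
# sum_{i>=n} |x_i|, which goes to 0. We truncate x to 50 coordinates.

def l1_norm(x):
    return sum(abs(t) for t in x)

x = [1.0 / 2 ** i for i in range(50)]  # a vector in l^1: x_i = 2^{-i}

def tail_norm(x, n):
    # l^1 distance between x and sum_{i<n} x_i e_i: only the tail differs,
    # since the partial expansion matches x in its first n coordinates.
    return l1_norm(x[n:])

print([round(tail_norm(x, n), 6) for n in (1, 5, 10, 20)])  # decreasing to 0
```

No finite combination of the $e_i$ ever hits $x$ exactly, but the tail norms show the partial expansions converging to it, which is precisely the infinite-sense basis property described above.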