Proof of "every finite dimensional vector space has a finite basis"

Finite dimensional means there exists a finite set that spans the vector space. Let V be such a vector space and let S be a finite set that spans V. The text I am following states the following theorem.

Theorem 1: Any minimal spanning set of V is a basis of V.

Since we have a spanning set to begin with, we can keep removing vectors that are linear combinations of the remaining ones until we are left with a minimal spanning set, which should then be a basis for the vector space.
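For concreteness, here is a small illustrative example of the removal process (my own, not from the text): in $\mathbb{R}^2$, the set $S = \{(1,0), (0,1), (1,1)\}$ spans, but

$$(1,1) = 1 \cdot (1,0) + 1 \cdot (0,1),$$

so $(1,1)$ can be removed without shrinking the span, leaving the minimal spanning set $\{(1,0), (0,1)\}$, which is a basis.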

However, the text I am following has the following theorem.

Theorem 2: Every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space.

Then the text mentions the following corollary

Corollary to Theorem 2: Every finite dimensional vector space has a finite basis.

The proof is not given for the corollary. Is it really that straightforward? Does it go something like this: the empty list of vectors, which by definition is a basis of the zero subspace {0}, is linearly independent and can therefore be extended to a basis of V? That would then imply that V has a basis.

I feel that something is missing.

All we know up until this point is that if a basis exists, then it is a minimal spanning set and a maximal linearly independent set, and that any two bases must have the same number of elements (which is where the motivation to define dimension will start to emerge). We have not yet shown that a finite dimensional vector space has a basis, and hence we cannot assume that V has a finite basis.

So my question is: how can we prove Theorem 2 without referring to any finite basis of V?

Line of proof for Theorem 2 given in the text: Let W be a subspace of V with basis $\{w_1,w_2,...,w_k\}$. Choose a vector $v_{k+1}$ from $V - W$. Then the set $\{w_1,w_2,...,w_k,v_{k+1}\}$ is linearly independent. Let the span of this new set be $W_1$. Then choose any vector from $V - W_1$, say $v_{k+2}$, and add it to the set of linearly independent vectors to get the new set $\{w_1,w_2,...,w_k,v_{k+1},v_{k+2}\}$. We can keep going like this, but can the process go on forever?
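In symbols (my own restatement of the text's process, not from the text itself), the construction produces a strictly increasing chain of subspaces

$$W = W_0 \subsetneq W_1 \subsetneq W_2 \subsetneq \cdots, \qquad W_j = \operatorname{span}\{w_1, w_2, \dots, w_k, v_{k+1}, \dots, v_{k+j}\},$$

where each inclusion is strict because $v_{k+j+1}$ is chosen outside $W_j$. The question is whether this chain must stop.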

This is where the text simply mentions that this process has to terminate because "the vector space is finite dimensional." To me, this is the statement that does not make sense. All we know is

  1. There is a finite set of vectors, say S, which spans V, and we know that
  2. There is a subspace W of V with some basis, say $\{w_1,w_2,...,w_k\}$.

How can we use just the above facts (and maybe also the aforementioned theorems about bases, which assume a basis exists) to prove Theorem 2?

I would greatly appreciate feedback on the above query.


Solution 1:

You can do it using the following theorem:

Theorem: In a finite-dimensional vector space, the length of every linearly independent list of vectors is less than or equal to the length of every spanning list of vectors.

I culled this formulation of the theorem from this question, where it is quoted as Theorem 2.23 from Axler's Linear Algebra Done Right, but I think I remember seeing something very similar in Beezer's A First Course in Linear Algebra -- it should be a standard theorem. The proof (see the linked question for details) doesn't rely on any concept of dimension or basis.

Once you have that theorem, the proof of Theorem 2 proceeds as follows. We construct a linearly independent list of vectors by the process the text describes. This process must terminate because we know that there is a finite list of spanning vectors (fact 1), and the list of linearly independent vectors cannot grow longer than that spanning list (by the theorem I quoted). Thus the list must terminate at some finite length. Moreover, the process can only halt when no vector of V lies outside the current span, i.e., when the list spans V. A linearly independent list that spans V is a basis.
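To make the termination bound explicit (a short sketch in the question's notation, using the theorem above): suppose the spanning set has $n$ vectors, $S = \{s_1, s_2, \dots, s_n\}$. At every stage the extended list $\{w_1, \dots, w_k, v_{k+1}, \dots, v_{k+j}\}$ is linearly independent, so the theorem gives

$$k + j \le n,$$

and hence the process stops after at most $n - k$ extension steps.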

The proof of the corollary is as you surmised. Start with the empty list, which is vacuously linearly independent, and extend it to a basis.

Solution 2:

Hmmm, you might say that a vector space is finite dimensional if and only if it has a finite basis.

A vector space is finite dimensional if there is a finite set of vectors that spans the space.

Any linearly independent set of vectors that spans the space forms a basis.

So suppose you have $n$ vectors $v_1, \dots, v_n$ that span the space. They may or may not be linearly independent. If they are independent, you are done. If not, then there must be some $v_k$ that is a linear combination of the others: $v_k = c_1v_1 + \cdots + c_{k-1}v_{k-1} + c_{k+1}v_{k+1} + \cdots + c_nv_n$.

$v_k$ is superfluous: removing it does not change the span. Get rid of it and repeat. Since each step removes one vector from a set that started with $n$, you can only repeat this finitely many times before you are left with a finite set of linearly independent vectors that still spans the space -- a basis.
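To see why the removal is harmless (a one-line check, my own addition): if $v_k = \sum_{i \ne k} c_i v_i$, then any vector written as a combination involving $v_k$ can be rewritten without it, so

$$\operatorname{span}\{v_1, \dots, v_n\} = \operatorname{span}\big(\{v_1, \dots, v_n\} \setminus \{v_k\}\big).$$

The spanning property is therefore preserved at every step, and the final independent set is a basis.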

Solution 3:

You are right: starting from basic principles, it doesn't make sense to prove Theorem 2 without proving the Corollary to Theorem 2 first! It is also non-trivial and takes a sequence of theorems to do properly.

The exposition you refer to seems poor if not outright wrong.

I recommend you look at Linear Algebra by Friedberg et al., 4th edition (see here), which fully addresses all of your questions. If you don't have access to the text, you should be able to find a digital copy at sci hub or lib gen.

Solution 4:

This is a little subtle and a little confusing.

I think what the book calls a Corollary to Theorem 2 is really a Corollary to Theorem 1: as you correctly argue, start with a finite spanning set (which exists by definition of "finite dimensional") and remove vectors one at a time until what's left is independent, hence a basis and of course finite.

Now do you have enough information to finish the proof of Theorem 2 (the "can't go on forever" part)?

Note: defining the notion of "finite dimensional" before defining "dimension" is grammatically confusing: you have an adjective derived from a noun that hasn't yet been defined. I've not seen this usage before. Fortunately, you do seem to understand it.