Why do $n$ linearly independent vectors span $\mathbb{R}^{n}$?

Suppose we have $n$ linearly independent vectors $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ in $\mathbb{R}^{n}$. I know that they do span $\mathbb{R}^{n}$, because we can easily specify a non-singular map which sends the $\mathbf{v}_{i}$ to the standard basis, and then to whichever vector in $\mathbb{R}^{n}$ we choose.
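
Spelled out, the argument I have in mind is roughly this: let $A$ be the matrix with the $\mathbf{v}_{i}$ as columns. Since the columns are linearly independent, $A$ is non-singular, and then any $\mathbf{x} \in \mathbb{R}^{n}$ is a linear combination of the $\mathbf{v}_{i}$:
$$\mathbf{x} = A\left(A^{-1}\mathbf{x}\right) = \sum_{i=1}^{n} c_i \mathbf{v}_{i}, \qquad \text{where } \mathbf{c} = A^{-1}\mathbf{x}.$$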

My question is: do we need all the machinery of linear maps, determinants, etc., or is there a proof that stays closer to the definitions? Every time I start writing down a proof, I end up wanting to say "and this system of equations can be solved uniquely because this matrix is non-singular". Is this necessary?


One of the first things you can prove in linear algebra, right after the basic definitions, is the following:

If $\{u_1, \ldots, u_n\}$ is a linearly independent set and $\{v_1, \ldots, v_m\}$ is a set that generates (spans) the space, then $n \leq m$.

The proof is a short inductive exchange argument (this is essentially the Steinitz exchange lemma); a sketch follows.
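
Sketch of that induction (omitting the bookkeeping): since the $v_j$ generate the space, write $u_1 = \sum_j a_j v_j$; as $u_1 \neq 0$ (it belongs to a linearly independent set), some $a_j \neq 0$, so that $v_j$ can be solved for in terms of $u_1$ and the remaining $v$'s, and the set obtained by exchanging $v_j$ for $u_1$ still generates. Repeat: once $u_1, \ldots, u_k$ have been exchanged in, express $u_{k+1}$ in terms of the current generating set; at least one of the remaining $v$'s must appear with a nonzero coefficient (otherwise $u_{k+1}$ would be a linear combination of $u_1, \ldots, u_k$, contradicting independence), and that $v$ gets exchanged for $u_{k+1}$. If $n > m$, the $v$'s would run out before the $u$'s do, which is impossible; hence $n \leq m$.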

Now take your $n$ linearly independent vectors in $\mathbb{R}^n$ and suppose they do not generate $\mathbb{R}^n$. Then there is some vector in $\mathbb{R}^n$ that cannot be written as a linear combination of them; adjoining it gives a linearly independent set with $n+1$ elements. But the canonical (standard basis) vectors form a generating set for $\mathbb{R}^n$ with only $n$ elements, so the lemma would give $n+1 \leq n$, a contradiction.
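
For completeness, the small step that the enlarged set is still linearly independent: call the new vector $w \notin \operatorname{span}\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ and suppose
$$c_0 w + c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n = 0.$$
If $c_0 \neq 0$, we could solve for $w$ as a linear combination of the $\mathbf{v}_i$, contradicting the choice of $w$; so $c_0 = 0$, and then all the $c_i$ vanish by the independence of the $\mathbf{v}_i$.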

For instance, see Peter Lax's book on linear algebra.