Why do elementary row operations preserve linear dependence relations among matrix columns?

I know that I can find a basis for the column space of a matrix $A$ by reducing the matrix to reduced row echelon form $J$. The columns of $A$ corresponding to the linearly independent columns of $J$ then form a basis for $\operatorname{Col}(A)$, because linear dependence relations among the columns are preserved under elementary row operations. I can't figure out why this is true, though, and a Google search turns up nothing, so I'm sure it's something simple. Can someone give me a proof?


Solution 1:

Let's start out from the standard basis $e_1,\dots,e_n$. Let $a_1,\dots,a_k$ be the column vectors of $A$.

Check that the row operation $r_i':=r_i+\lambda\,r_j$ corresponds to the change of basis $e_j':=e_j-\lambda\,e_i$, that is, for a vector $v$ we have $$v=\sum_k\alpha_k e_k=\sum_k\alpha_k' e_k'$$ where the row operation is applied to the coordinate vector $\pmatrix{\alpha_1\\ \alpha_2\\ \vdots} \leadsto \pmatrix{\alpha_1'\\ \alpha_2'\\ \vdots}$.
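Spelling out the check: only the coordinate $\alpha_i$ and the basis vector $e_j$ change, so $$\sum_k\alpha_k'e_k'=(\alpha_i+\lambda\,\alpha_j)\,e_i+\alpha_j\,(e_j-\lambda\,e_i)+\sum_{k\neq i,j}\alpha_k e_k=\sum_k\alpha_k e_k=v.$$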

So, in this interpretation the column vectors all "stay" where they are in the $n$-dimensional space, but we keep on changing the basis. Of course, the vectors stay (in)dependent. The crucial point is that each step of the row transformation (or rather its inverse) takes a basis to another basis.
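Equivalently, in matrix language: each elementary row operation is left multiplication by an invertible matrix $E$, and $Ac=0$ if and only if $(EA)c=0$ for every coefficient vector $c$, so the columns of $A$ and of $EA$ satisfy exactly the same linear dependence relations.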

Solution 2:

Here is a geometric way of looking at it (it's not a rigorous proof, but it gives a nice way to understand, non-algebraically, why it's true): The geometric meaning of the determinant of a square matrix is that it is the algebraic volume of the parallelepiped spanned by the columns (or rows) of the matrix (algebraic volume means that the volume can be negative, depending on the orientation of the parallelepiped). By volume here we mean $n$-dimensional volume. Thus, the determinant is $0$ iff the volume is $0$ iff the columns span a degenerate parallelepiped iff the columns (rows) are dependent.

Now, row operations have clear effects on the parallelepiped. Interchanging two rows just reverses the orientation. Multiplying a row by a non-zero scalar stretches or shrinks the parallelepiped in one direction. Adding a multiple of one row to another is a shear: it slides one edge within a two-dimensional section of the whole space, parallel to another edge. Intuitively, none of these operations can turn a degenerate parallelepiped into a non-degenerate one, or vice versa; thus the vanishing or non-vanishing of the determinant is preserved.
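For a square matrix this intuition matches the algebra: each row operation is left multiplication by an elementary matrix $E$ with $$\det(E)=\begin{cases}-1 & \text{(row swap)}\\ c\neq 0 & \text{(scaling a row by $c$)}\\ 1 & \text{(adding a multiple of one row to another)},\end{cases}$$ so $\det(EA)=\det(E)\det(A)$ vanishes exactly when $\det(A)$ does.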

Solution 3:

Take your $m \times n$ matrix $A$, row reduce it to $A'$, and suppose its rank is $r$. Form a new $m \times r$ matrix $B'$ from the (linearly independent) pivot columns of $A'$; note that the rank of $B'$ is also $r$. Now run your row-reduction sequence backwards on $B'$ to produce $B$, which consists of the corresponding pivot columns of the original $A$. Since row operations do not change rank, the rank of $B$ is also $r$, which means the $r$ columns of $B$ are linearly independent.
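For concreteness, here is a small worked instance of the whole procedure: $$A=\pmatrix{1 & 2 & 1\\ 2 & 4 & 0\\ 3 & 6 & 0}\ \leadsto\ A'=\pmatrix{1 & 2 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0}.$$ The pivot columns of $A'$ are the first and third, so the first and third columns of $A$, namely $(1,2,3)^T$ and $(1,0,0)^T$, form a basis of $\operatorname{Col}(A)$; the dependence relation $a_2=2a_1$ among the columns of $A$ is exactly the relation visible among the columns of $A'$.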