Why use Gauss-Jordan elimination instead of Gaussian elimination, and what are the differences?

Why use Gaussian elimination instead of Gauss-Jordan elimination, and vice versa, for solving systems of linear equations? What are the differences and the benefits of each?

I've just been solving systems of linear equations of the form $Ax = B$ by reducing matrix $A$ to a diagonal matrix in which every nonzero value equals 1. I'm not sure whether this would be considered Gaussian elimination or Gauss-Jordan elimination.


Gaussian elimination puts a matrix in row echelon form, while Gauss-Jordan elimination puts a matrix in reduced row echelon form. For small systems (or by hand), it is usually more convenient to use Gauss-Jordan elimination and explicitly solve for each variable represented in the matrix system. However, Gaussian elimination (followed by back substitution) is the computationally more efficient choice for computers, as quantified below. Also, Gaussian elimination is all you need to determine the rank of a matrix (an important property of every matrix), and going to the trouble of putting a matrix in reduced row echelon form is not worth it if the rank is all you want.
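To make "computationally more efficient" concrete: for a dense $n \times n$ system, the standard leading-order counts of multiplications are

$$ \underbrace{\tfrac{1}{3}n^{3} + O(n^{2})}_{\text{Gaussian elimination + back substitution}} \qquad \text{versus} \qquad \underbrace{\tfrac{1}{2}n^{3} + O(n^{2})}_{\text{Gauss-Jordan elimination}} $$

so full reduction to RREF costs roughly $50\%$ more arithmetic.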

EDIT: Here are some abbreviations to start off with: REF = "Row Echelon Form". RREF = "Reduced Row Echelon Form."

In your question, you say you reduce a matrix $A$ to a diagonal matrix where every nonzero value equals 1. For this to happen, you must perform row operations to "pivot" along each entry of the diagonal. Such row operations involve multiplying/dividing a row by a nonzero scalar, or adding/subtracting a scalar multiple of one row to/from another; a concrete instance of the elimination step is shown below. My interpretation of REF is doing row operations in such a way as to avoid dividing rows by their pivot values (to make the pivots become 1). If you then go through each pivot (the numbers along the diagonal) and divide those rows by their leading coefficients, you will end up in RREF. See these Khan Academy videos for worked examples.
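As a concrete instance of the "add/subtract a scalar multiple" operation: the step that clears the entry below the first pivot is

$$ R_{2} \;\leftarrow\; R_{2} - \frac{a_{21}}{a_{11}}\,R_{1}, $$

which zeros out $a_{21}$ while leaving row 1 untouched.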

In a system $Ax=B$, $x$ has a unique solution for every $B$ precisely when $A$ is invertible. Invertible matrices have several important properties. The most useful property for your question is that their RREF is the identity matrix (a matrix with only 1's down the diagonal and 0's everywhere else). If you row-reduce a matrix and it does not become the identity matrix in RREF, then that matrix was non-invertible. Non-invertible matrices (also known as singular matrices) are not as helpful when trying to solve a system exactly.
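Here is a minimal, self-contained Python sketch of this whole procedure; the function name `gauss_jordan`, the partial-pivoting step, and the tolerance are my own choices for illustration, not something from the answer above. It row-reduces $[A \mid I]$ to RREF and checks whether the left block became the identity:

```python
def gauss_jordan(A, tol=1e-12):
    """Reduce [A | I] to RREF; return (rref_of_A, R), where R is the
    accumulated product of the row operations (A's inverse when A is
    invertible)."""
    n = len(A)
    # Build the augmented matrix [A | I] as floats.
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot_row][col]) < tol:
            continue  # no usable pivot in this column; A is singular
        M[col], M[pivot_row] = M[pivot_row], M[col]
        # Normalize the pivot row so the pivot equals 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Clear the column above *and* below the pivot (the "Jordan" part;
        # plain Gaussian elimination would clear only below).
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [v - factor * w for v, w in zip(M[r], M[col])]
    return [row[:n] for row in M], [row[n:] for row in M]


A = [[2, 4], [1, 1]]
rref_A, R = gauss_jordan(A)
identity = all(abs(rref_A[i][j] - (i == j)) < 1e-9
               for i in range(2) for j in range(2))
print(identity)  # True: the RREF of A is the identity, so A is invertible
print(R)         # [[-0.5, 2.0], [0.5, -1.0]]  ==  A^{-1}
```

Running it on the $2 \times 2$ matrix used in the worked example below reproduces the inverse found there.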


The following example, drawn from Finding a sequence of elementary matrices, complements the insight of @Xoque55.


The target matrix is $$ \left[ \begin{array}{cc} 2 & 4 \\ 1 & 1 \\ \end{array} \right] $$

Use elementary row operations for Gaussian elimination. $\color{blue}{Blue}$ coloring denotes changed entries in the output matrix.


Row echelon form

Form the augmented matrix

$$ \left[ \begin{array}{c|c} \mathbf{A} & b \end{array} \right] = \left[ \begin{array}{cc|c} 2 & 4 & b_{1} \\ 1 & 1 & b_{2} \\ \end{array} \right] $$

Normalize row 1: $$ \left[ \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & 1 \\ \end{array} \right] \left[ \begin{array}{cc|c} 2 & 4 & b_{1} \\ 1 & 1 & b_{2} \\ \end{array} \right] = \left[ \begin{array}{cc|c} \color{blue}{1} & \color{blue}{2} & \frac{1}{2}b_{1} \\ 1 & 1 & b_{2}\\ \end{array} \right] $$

Clear column 1 $$ \left[ \begin{array}{rc} 1 & 0 \\ -1 & 1 \\ \end{array} \right] \left[ \begin{array}{cc|c} \color{blue}{1} & \color{blue}{2} & \frac{1}{2}b_{1} \\ 1 & 1 & b_{2}\\ \end{array} \right] = \left[ \begin{array}{cr|c} 1 & 2 & \frac{1}{2}b_{1} \\ \color{blue}{0} & \color{blue}{-1} & b_{2} - \frac{1}{2}b_{1} \\ \end{array} \right] $$

The system can be solved via back substitution.
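Carrying out the back substitution explicitly: row 2 gives $-x_{2} = b_{2} - \frac{1}{2}b_{1}$, and row 1 then gives $x_{1} + 2x_{2} = \frac{1}{2}b_{1}$, so

$$ x_{2} = \frac{1}{2}b_{1} - b_{2}, \qquad x_{1} = \frac{1}{2}b_{1} - 2x_{2} = -\frac{1}{2}b_{1} + 2b_{2}, $$

in agreement with the solution obtained from $\mathbf{A}^{-1}$ below.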

Reduced row echelon form

The reduction process is $$ \left[ \begin{array}{c|c} \mathbf{A} & \mathbf{I} \end{array} \right] \qquad \Rightarrow \qquad \left[ \begin{array}{c|c} \mathbf{E_{A}} & \mathbf{R} \end{array} \right] $$ where $\mathbf{E_{A}}$ is the RREF of $\mathbf{A}$ and $\mathbf{R}$ is the accumulated product of the row operations; when $\mathbf{A}$ is invertible, $\mathbf{E_{A}} = \mathbf{I}$ and $\mathbf{R} = \mathbf{A}^{-1}$.

Form the augmented matrix

$$ \left[ \begin{array}{c|c} \mathbf{A} & \mathbf{I} \end{array} \right] = \left[ \begin{array}{cc|cc} 2 & 4 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ \end{array} \right] $$

Normalize row 1: $$ \left[ \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & 1 \\ \end{array} \right] \left[ \begin{array}{cc|cc} 2 & 4 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ \end{array} \right] = \left[ \begin{array}{cc|cc} \color{blue}{1} & \color{blue}{2} & \frac{1}{2} & 0 \\ 1 & 1 & 0 & 1 \\ \end{array} \right] $$

Clear column 1 $$ \left[ \begin{array}{rc} 1 & 0 \\ -1 & 1 \\ \end{array} \right] \left[ \begin{array}{cc|cc} 1 & 2 & \frac{1}{2} & 0 \\ 1 & 1 & 0 & 1 \\ \end{array} \right] = \left[ \begin{array}{cr|rc} 1 & 2 & \frac{1}{2} & 0 \\ \color{blue}{0} & \color{blue}{-1} & -\frac{1}{2} & 1 \\ \end{array} \right] $$

Normalize row 2 $$ \left[ \begin{array}{cr} 1 & 0 \\ 0 & -1 \\ \end{array} \right] \left[ \begin{array}{cr|rc} 1 & 2 & \frac{1}{2} & 0 \\ 0 & -1 & -\frac{1}{2} & 1 \\ \end{array} \right] = \left[ \begin{array}{cc|cr} 1 & 2 & \frac{1}{2} & 0 \\ \color{blue}{0} & \color{blue}{1} & \frac{1}{2} & -1 \\ \end{array} \right] $$

Clear column 2 $$ \left[ \begin{array}{cr} 1 & -2 \\ 0 & 1 \\ \end{array} \right] \left[ \begin{array}{cc|cr} 1 & 2 & \frac{1}{2} & 0 \\ 0 & 1 & \frac{1}{2} & -1 \\ \end{array} \right] = \left[ \begin{array}{cc|rr} \color{blue}{1} & \color{blue}{0} & -\frac{1}{2} & 2 \\ 0 & 1 & \frac{1}{2} & -1 \\ \end{array} \right] $$ Result $$ \left[ \begin{array}{c|c} \mathbf{E_{A}} & \mathbf{R} \end{array} \right] = \left[ \begin{array}{cc|rr} 1 & 0 & -\frac{1}{2} & 2 \\ 0 & 1 & \frac{1}{2} & -1 \\ \end{array} \right] $$

The solution is $$ \mathbf{A} x = b \quad \Rightarrow \quad x = \mathbf{A}^{-1} b \quad \Rightarrow \quad x = \frac{1}{2}\left[ \begin{array}{rr} -1 & 4 \\ 1 & -2 \\ \end{array} \right] \left[ \begin{array}{c} b_{1} \\ b_{2} \\ \end{array} \right] \quad \Rightarrow \quad x = \left[ \begin{array}{l} -\frac{1}{2} b_{1} + 2b_{2} \\ \phantom{-}\frac{1}{2} b_{1} - b_{2} \\ \end{array} \right] $$
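As a quick check, substituting this $x$ back into the system recovers $b$:

$$ \mathbf{A}x = \left[ \begin{array}{cc} 2 & 4 \\ 1 & 1 \\ \end{array} \right] \left[ \begin{array}{l} -\frac{1}{2} b_{1} + 2b_{2} \\ \phantom{-}\frac{1}{2} b_{1} - b_{2} \\ \end{array} \right] = \left[ \begin{array}{c} b_{1} \\ b_{2} \\ \end{array} \right] $$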

Product of reduction matrices

The product of the sequence of reduction matrices, written right to left in the order the operations were applied, is the inverse: $$ \left[ \begin{array}{cr} 1 & -2 \\ 0 & 1 \\ \end{array} \right] \left[ \begin{array}{cr} 1 & 0 \\ 0 & -1 \\ \end{array} \right] \left[ \begin{array}{rc} 1 & 0 \\ -1 & 1 \\ \end{array} \right] \left[ \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & 1 \\ \end{array} \right] = \left[ \begin{array}{rr} -\frac{1}{2} & 2 \\ \frac{1}{2} & -1 \\ \end{array} \right] = \mathbf{A}^{-1} $$
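This is easy to confirm numerically; a small NumPy check (my addition for illustration, not part of the original answer):

```python
import numpy as np

# The four elementary matrices, in the order they were applied.
E1 = np.array([[0.5, 0.0], [0.0, 1.0]])   # normalize row 1
E2 = np.array([[1.0, 0.0], [-1.0, 1.0]])  # clear column 1
E3 = np.array([[1.0, 0.0], [0.0, -1.0]])  # normalize row 2
E4 = np.array([[1.0, -2.0], [0.0, 1.0]])  # clear column 2

A = np.array([[2.0, 4.0], [1.0, 1.0]])

# Multiplying the factors right to left reproduces the inverse.
R = E4 @ E3 @ E2 @ E1
print(R)                                   # [[-0.5  2. ] [ 0.5 -1. ]]
print(np.allclose(R, np.linalg.inv(A)))    # True
print(np.allclose(R @ A, np.eye(2)))       # True
```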


Gauss-Jordan elimination, applied to the augmented matrix $[A \mid I]$, gives you the matrix inverse $A^{-1}$.

Gaussian elimination, applied to $[A \mid b]$, gives you only the solution to $Ax=b$.

Once you have the matrix inverse you can of course also obtain the solution as $x=A^{-1}b$, but computing the inverse first is more work than solving the system directly.
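This is also how numerical libraries are typically used: solve the system directly instead of inverting first. A small NumPy illustration (my example; the right-hand side $b$ is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 4.0], [1.0, 1.0]])
b = np.array([3.0, 1.0])

# Gaussian-elimination style: factor A and back-substitute for this one b.
x_solve = np.linalg.solve(A, b)

# Inverse-first style: more arithmetic, and generally less accurate.
x_inv = np.linalg.inv(A) @ b

print(x_solve)                      # [0.5 0.5]
print(np.allclose(x_solve, x_inv))  # True
```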