Is there a way to know if a row reduction of a matrix has been done correctly?
I'm an undergrad taking the course "Linear Algebra 1". I came across a problem: we often need to apply Gaussian elimination to matrices. This quickly stops being a thinking skill and becomes pure technique.
Yet, often in exams there's a question that requires you to apply row reduction to a matrix.
I am looking for a way to check that the Gaussian elimination has been done properly, meaning with no calculation mistakes, other than going over all the steps and verifying each one. That process would double the time I spend on a given question, and given the lack of time in a big course exam, which is also very stressful, such a method would be very helpful to me.
Note: we're allowed to use a simple scientific calculator (not a graphing calculator).
We know that elementary row operations do not change the row space of the matrix. And if a matrix is in rref, then it is relatively easy to check whether a vector belongs to the row space.
So suppose you have a matrix $A$ and a reduced row echelon matrix $B$. If $R_A$ and $R_B$ are their row spaces, you can easily check whether $R_A\subseteq R_B$. Of course, this is only "half"$^1$ of the verification that $R_A=R_B$, which is equivalent to $A\sim B$.
Example. Suppose that I have a matrix $$A= \begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ \end{pmatrix}.$$
And that after Gaussian elimination I get: $$B= \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 &-1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} $$
To check whether $R_A\subseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$, i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, the coordinates where the pivots sit directly give $c_1$, $c_2$ and $c_3$. So the coefficients are easy to find.
Let us try with the fourth row: $(1,2,1,1)$. We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether $$(\boxed{1},\boxed{2},\boxed{1},1)= 1\cdot(1,0,0,1)+2\cdot(0,1,0,-1)+1\cdot(0,0,1,2) $$ We see that this is true. If the same thing works for each row of $A$, this shows that $R_A\subseteq R_B$.
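If you want to verify this pivot-coefficient trick on a computer afterwards (not in the exam, of course), it can be sketched in a few lines of NumPy. The function name `row_space_contained` is my own; the matrices are the ones from the example above.

```python
import numpy as np

# The matrices from the example above.
A = np.array([[1, 1, 1, 2],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 2, 1, 1]], dtype=float)
B = np.array([[1, 0, 0, 1],
              [0, 1, 0, -1],
              [0, 0, 1, 2],
              [0, 0, 0, 0]], dtype=float)

def row_space_contained(A, B):
    """Check R_A subseteq R_B, where B is assumed to be in rref.

    For each nonzero row of B, find its pivot column; the entries of a
    row of A in those pivot columns are the only possible coefficients,
    exactly as in the hand computation above."""
    nonzero = [r for r in B if np.any(r != 0)]
    pivots = [int(np.argmax(r != 0)) for r in nonzero]
    for row in A:
        coeffs = [row[p] for p in pivots]
        combo = sum(c * r for c, r in zip(coeffs, nonzero))
        if not np.allclose(combo, row):
            return False
    return True

print(row_space_contained(A, B))  # True: every row of A checks out
```

Note that this automates only the same "half" of the verification as the hand check: it tests $R_A\subseteq R_B$ and nothing more.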
Let me try now another example where I make a mistake on purpose to see how to find the mistake. $$\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ \end{pmatrix}\overset{(1)}\sim \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ \end{pmatrix}\overset{(2)}\sim \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ \end{pmatrix}\overset{(3)}\sim \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{pmatrix}\overset{(4)}\sim \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}\overset{(5)}\sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} $$
We can check that $$(1,1,1,2)\ne 1\cdot(1,0,0,0)+1\cdot(0,1,0,0)+1\cdot(0,0,1,1).$$
I can even make the same verification for the matrix after each step. For example, for the matrix after step $(2)$, i.e., $\begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ \end{pmatrix}$, everything works. So some error must be before this step.
I will stress once again that this is only halfway verification. I have only checked $R_A\subseteq R_B$ but not $R_B\subseteq R_A$.
So it is possible that I make a mistake which I do not notice this way. Here is a (rather naive) example:
$$\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 &-1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix} $$
The sanity check described above works. (We check that $R_A\subseteq R_B$, which is true.) But the result is incorrect.
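On a computer (again, not in an exam) the equality of row spaces can be checked completely by comparing ranks: $R_A=R_B$ if and only if $\operatorname{rank} A$, $\operatorname{rank} B$ and the rank of the matrix obtained by stacking the rows of both are all equal, since the stacked rank is the dimension of $R_A+R_B$. A minimal NumPy sketch, with the hypothetical helper name `same_row_space`; it catches exactly the mistake above:

```python
import numpy as np

def same_row_space(A, B):
    """R_A = R_B exactly when stacking the two matrices adds no new
    directions: rank(A) == rank(B) == rank of the stacked matrix."""
    stacked = np.vstack([A, B])
    return (np.linalg.matrix_rank(A)
            == np.linalg.matrix_rank(B)
            == np.linalg.matrix_rank(stacked))

A = np.array([[1, 1, 1, 2],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 2, 1, 1]], dtype=float)
B_correct = np.array([[1, 0, 0, 1],
                      [0, 1, 0, -1],
                      [0, 0, 1, 2],
                      [0, 0, 0, 0]], dtype=float)
B_wrong = np.eye(4)  # the incorrect result from the naive example

print(same_row_space(A, B_correct))  # True
print(same_row_space(A, B_wrong))    # False: the rank jumped from 3 to 4
```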
If I want to be able to check both inclusions, and additionally to make a check after each step, I can use an augmented matrix that records the row operations. (But this is much more work.)
In our example I would do the following: $$ \left(\begin{array}{cccc|cccc} 1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 &-1 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 1 & 0 &-1 &-1 & 0 & 1 &-1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 1 & 0 & 0 & 1 & 1 & 0 &-1 & 0 \\ 0 & 1 & 0 &-1 &-1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right) $$ Now the four numbers on the right are coefficients which tell me how to get the corresponding row as a linear combination of the rows of the original matrix $A$. For example, if I look at the first row, I can check that $$1\cdot(1,1,1,2)-1\cdot(0,1,1,1)=(1,0,0,1).$$ By making a similar verification for each row I can test that $R_A\subseteq R_B$.
Notice that I can also do this halfway through the computation. For example, the last row of the third matrix is $$\left(\begin{array}{cccc|cccc} 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right)$$ and I can check that $$-1\cdot(1,1,0,0)-1\cdot(0,1,1,1)+1\cdot(1,2,1,1)=(0,0,0,0).$$
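The bookkeeping done by the augmented block can also be automated. The sketch below, under my own naming (`rref_with_bookkeeping`), row-reduces $[A \mid I]$; since every row operation applied to the whole block preserves the invariant "right block times $A$ equals left block", the right block always holds the coefficients used in the hand check above.

```python
import numpy as np

def rref_with_bookkeeping(A):
    """Row-reduce [A | I]. Returns (R, C) where R is the rref of A and
    each row of C gives the coefficients expressing the corresponding
    row of R as a combination of the original rows of A."""
    m, n = A.shape
    M = np.hstack([A.astype(float), np.eye(m)])
    row = 0
    for col in range(n):
        # Find a usable pivot in this column, at or below `row`.
        candidates = [r for r in range(row, m) if abs(M[r, col]) > 1e-12]
        if not candidates:
            continue
        pivot = candidates[0]
        M[[row, pivot]] = M[[pivot, row]]   # swap the pivot row up
        M[row] /= M[row, col]               # scale the pivot to 1
        for r in range(m):                  # clear the rest of the column
            if r != row:
                M[r] -= M[r, col] * M[row]
        row += 1
    return M[:, :n], M[:, n:]

A = np.array([[1, 1, 1, 2],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 2, 1, 1]], dtype=float)
R, C = rref_with_bookkeeping(A)
# Each row of R equals the recorded combination of rows of A:
print(np.allclose(C @ A, R))  # True
```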
$^1$ This is similar to the advice given in the comments. If you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all the solutions. So this is just a "half-check".
The actual algorithm for Gaussian Elimination looks like this.
I gave a similar answer showing how to perform the LU decomposition, which is Gaussian elimination without pivoting. The purpose is to zero out the entries beneath the pivot using the multiplier $\ell_{jk}$. That is why you have the operation $$u_{j,k:m} = u_{j,k:m} - \ell_{jk}\, u_{k,k:m},$$ which subtracts $\ell_{jk}$ (the ratio you just computed) times the pivot row.
That answer yielded the following.
Suppose that
$$ A = \begin{bmatrix} 1 & 1 & 1 \\ 3 & 5 & 6 \\ -2 & 2 & 7 \end{bmatrix} $$ $$ A = LU $$
$$ U =A, L=I$$ $$ k=1,m=3,j=2$$ $$\ell_{21} = \frac{u_{21}}{u_{11}} = \frac{a_{21}}{a_{11}} = 3 $$ $$ u_{2,1:3} = u_{2,1:3} - 3 \cdot u_{1,1:3} $$ Then we're going to subtract 3 times the 1st row from the 2nd row $$ \begin{bmatrix} 3 & 5 & 6 \end{bmatrix} - 3 \cdot \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 2 & 3\end{bmatrix} $$ Updating each of them $$U = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 3 \\ -2 & 2 & 7 \end{bmatrix} $$ $$ L =\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$ $$k=1,j=3,m=3 $$ $$\ell_{31} = \frac{u_{31}}{u_{11}} = \frac{-2}{1} = -2 $$ $$ u_{3,1:3} = u_{3,1:3} +2 \cdot u_{1,1:3} $$ Then we add two times the first row to the third row $$ \begin{bmatrix} -2 & 2 & 7 \end{bmatrix} + 2 \cdot \begin{bmatrix} 1 & 1& 1 \end{bmatrix} = \begin{bmatrix}0 & 4 & 9 \end{bmatrix} $$ Updating $$ U = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 3 \\ 0 & 4 & 9 \end{bmatrix} $$ $$ L =\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix} $$ $$ k=2, j=3,m=3 $$ $$ \ell_{32} = \frac{u_{32}}{u_{22}} = \frac{4}{2} = 2$$ We're subtracting out little blocks $$ u_{3,2:3} = u_{3,2:3} - 2 \cdot u_{2,2:3} $$ $$ \begin{bmatrix} 4 & 9 \end{bmatrix} - 2 \cdot\begin{bmatrix} 2& 3 \end{bmatrix} = \begin{bmatrix} 0 & 3 \end{bmatrix} $$ Updating $$ U = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & 3 \end{bmatrix} $$ $$ L =\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -2 & 2 & 1 \end{bmatrix} $$ It now terminates $$ A = LU $$ $$ \underbrace{\begin{bmatrix} 1 & 1 & 1 \\ 3 & 5 & 6 \\ -2 & 2 & 7 \end{bmatrix}}_{A} = \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -2 & 2 & 1 \end{bmatrix}}_{L} \underbrace{\begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & 3 \end{bmatrix}}_{U} $$
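The hand computation above can be sketched in Python. This is LU without pivoting (so it assumes every pivot $u_{kk}$ is nonzero, as in this example); `lu_no_pivot` is my own name for it.

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization without pivoting, following the update
    u[j, k:m] = u[j, k:m] - l[j, k] * u[k, k:m] from the text.
    Assumes all pivots u[k, k] are nonzero."""
    m = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(m)
    for k in range(m - 1):
        for j in range(k + 1, m):
            L[j, k] = U[j, k] / U[k, k]     # the multiplier l_jk
            U[j, k:] -= L[j, k] * U[k, k:]  # zero out U[j, k]
    return L, U

A = np.array([[1, 1, 1],
              [3, 5, 6],
              [-2, 2, 7]], dtype=float)
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))  # True: reproduces the L and U above
```

As a sanity check in the spirit of the question: multiplying $L$ and $U$ back together and comparing with $A$ verifies the elimination at the cost of one matrix multiplication.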