Is reducing a matrix to row echelon form useful at all?

I have just started auditing Linear Algebra, and the first thing we learned was how to solve a system of linear equations by reducing it to row echelon form (or reduced row echelon form). I already know how to solve a system of linear equations using determinants, so using row echelon form seems very inefficient and an easy way to make mistakes.

The lecturer seemed to indicate that this will be useful in the future for something else, but a friend who has a PhD in mathematics said it is totally useless.

Is this true? Is this technique totally useless, or is there some "higher mathematics" field/technique where row echelon form will be useful?


Solution 1:

Go beyond $2 \times 2$ and row echelon form is the way to solve systems of linear equations. This stuff is not done by hand, but by a computer. Row echelon form is much, much faster than using determinants (Cramer's formula). Ask someone who works in numerical analysis about this. Your friend with the Ph.D. is clearly unaware of the needs of applied math.

In pure math, Cramer's formula is very important for some proofs (and I personally prefer it for conceptual reasons over the algorithmic row echelon business, but then I don't work in numerical analysis). In applied math problems that give rise to systems of linear equations, you typically face equations in hundreds of variables. You are not going to solve those with a determinant. Moreover, if a computer needed the determinant of a $100 \times 100$ matrix, it would not sum the $100!$ terms of the Laplace expansion formula; it would instead apply a faster algorithm such as LU decomposition, which is essentially row reduction.

Your profile says you are studying economics. At some point you may need to deal with a system of linear equations in economics and have a computer solve it for you, and the method of solution can be treated as a black box, in the same way most people drive without knowing how a car really works. But the reality is that if it were not for the algorithm of reduction to row echelon form, the computer could not solve systems of hundreds of equations rapidly.
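
To make that concrete, here is a minimal sketch (my own illustration, not a production routine) of solving $Ax = b$ by reducing the augmented matrix to row echelon form and then back-substituting; the tiny $2 \times 2$ system at the end is made up purely to show the call.

```python
# Minimal sketch: solve A x = b by row reduction (Gaussian elimination)
# on the augmented matrix [A | b], then back-substitution.

def solve_by_row_reduction(A, b):
    n = len(A)
    # Build the augmented matrix so row operations act on A and b together.
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]

    # Forward elimination: create zeros below each pivot.
    for col in range(n):
        # Partial pivoting: use the row with the largest entry in this column,
        # which is what makes the method numerically reliable in practice.
        pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot_row][col]) < 1e-12:
            raise ValueError("matrix is singular (or nearly so)")
        M[col], M[pivot_row] = M[pivot_row], M[col]

        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]

    # Back substitution on the resulting triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Made-up example:  2x + y = 5,  x + 3y = 10  ->  x = 1, y = 3
print(solve_by_row_reduction([[2, 1], [1, 3]], [5, 10]))
```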

Solution 2:

Gaussian elimination on an $n \times n$ matrix takes on the order of $n^3$ operations, where an operation is an addition, subtraction, multiplication, or division. By contrast, computing a determinant using the cofactor expansion takes on the order of $n!$ operations, which is vastly more.

For example, consider a $100 \times 100$ matrix. Then $100^3$ is a million, and a million operations can be done by a typical laptop in under a second; but $100!$ is roughly $10^{158}$ (a number with $158$ digits), and that many operations would take even a supercomputer far longer than the age of the universe.
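
If you want to sanity-check those numbers yourself, here is a small back-of-the-envelope script; the figure of $10^9$ operations per second is an assumed ballpark for an ordinary machine, not a measurement.

```python
# Back-of-the-envelope comparison of n^3 vs n! operation counts for n = 100,
# assuming roughly 10**9 arithmetic operations per second (an assumption,
# not a benchmark).
import math

n = 100
ops_per_second = 1e9

elimination_ops = n ** 3                 # 1,000,000 operations
cofactor_ops = float(math.factorial(n))  # about 9.33e157 operations

print(f"Gaussian elimination: {elimination_ops:,} ops, "
      f"about {elimination_ops / ops_per_second:.3f} s")
print(f"Cofactor expansion:   about {cofactor_ops:.2e} ops, "
      f"about {cofactor_ops / ops_per_second:.1e} s "
      f"(the universe is roughly 4e17 s old)")
```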

In fact, I believe the opposite of your claim is true: if you want to compute determinants, the best way is to reduce to echelon form.
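
Here is a rough sketch of that idea (the function name and the small test matrix are just for illustration): eliminate below the diagonal, keep track of row swaps, and multiply the diagonal entries of the resulting triangular matrix.

```python
# Sketch: compute a determinant by row reduction. Adding a multiple of one row
# to another leaves the determinant unchanged; each row swap flips its sign;
# the determinant of the final triangular matrix is the product of its pivots.

def det_by_row_reduction(A):
    n = len(A)
    M = [list(map(float, row)) for row in A]   # work on a copy
    det = 1.0

    for col in range(n):
        # Partial pivoting: pick the largest entry in this column.
        pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot_row][col] == 0.0:
            return 0.0                          # zero pivot column => singular
        if pivot_row != col:
            M[col], M[pivot_row] = M[pivot_row], M[col]
            det = -det                          # row swap flips the sign

        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]   # determinant unchanged

        det *= M[col][col]                      # collect the pivot

    return det

# Made-up check: det([[1, 2], [3, 4]]) = 1*4 - 2*3 = -2
print(f"{det_by_row_reduction([[1, 2], [3, 4]]):.6g}")
```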

Solution 3:

alexx already covered much of what I had wanted to say, so allow me to share something written by Forman Acton in his (excellent!) Numerical Methods that Work on the matter:

…the fact that this sequence of operations takes (much more) labor on the part of the computer than a direct solution of the problem by (Gaussian elimination) has never penetrated (the student's) consciousness. The glibness of the formal mathematical notation has obscured the realities of the arithmetic process. The cure is simple enough—the student should be asked to solve a 4-by-4 system, by hand, both ways. But this sort of realism is currently unfashionable—arithmetic labor being deemed only suitable for machines.