Why does Friedberg say that the role of the determinant is less central than in former times?

I am taking a proof-based introductory course in Linear Algebra as an undergraduate student of Mathematics and Computer Science. The author of my textbook (Friedberg's Linear Algebra, 4th Edition) says in the introduction to Chapter 4:

The determinant, which has played a prominent role in the theory of linear algebra, is a special scalar-valued function defined on the set of square matrices. Although it still has a place in the study of linear algebra and its applications, its role is less central than in former times.

He even sets up the chapter in such a way that you can skip going into detail and move on:

For the reader who prefers to treat determinants lightly, Section 4.4 contains the essential properties that are needed in later chapters.

Could anyone offer a didactic and simple explanation that refutes or supports the author's statement?


Solution 1:

Friedberg is not wrong, at least from a historical standpoint, as I am going to try to show.

Determinants were discovered "as such" in the second half of the 18th century by Cramer, who used them in his celebrated rule for the solution of a linear system (in terms of quotients of determinants). They spread rather rapidly among the mathematicians of the next two generations; these mathematicians discovered properties of determinants that we now, with our modern vision, mostly express in terms of matrices.

Cauchy gave two important results about determinants, as explained in the very nice article by Hawkins referenced below:

  • around 1815, Cauchy discovered the multiplication rule (rows times columns) of two determinants. This is typical of a result that has been completely revamped: nowadays, this rule belongs to the multiplication of matrices, and the multiplication of determinants is restated as the homomorphism rule $\det(AB)= \det(A)\det(B)$.

  • around 1825, he discovered eigenvalues "associated with a symmetric determinant" and established the important result that these eigenvalues are real; this discovery has its roots in astronomy, in connection with Sturm, which explains the name "secular values" he attached to them.

Matrices made a timid appearance in the mid-19th century (in England); "matrix" is a term coined by Sylvester. I strongly advise taking a look at his elegant style in his Collected Papers.

Together with his friend Cayley, he can rightly be counted among the founding fathers of linear algebra, with determinants as a permanent reference. Here is a major quote of Sylvester:

"I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent".

A lot of important polynomials are either generated or advantageously expressed as determinants:

  • the characteristic polynomial (of a matrix) is expressed as the famous $\det(A-\lambda I)$ (a small sketch is given just after this list),

  • the theory of orthogonal polynomials, mainly developed at the end of the 19th century, can in great part be expressed with determinants,

  • the "resultant" of two polynomials, invented by Sylvester (giving a condition for these polynomials to have a common root), etc.

Let us repeat it: for a mid-19th-century mathematician, a square array of numbers necessarily has a value (its determinant): it cannot have any other meaning. If it is a rectangular array, the numbers attached to it are the determinants of the square submatrices that can be "extracted" from the array.

The identification of "Linear Algebra" as an integral (and new) part of Mathematics is mainly due to the German school (say from 1870 until the 1930s). I will not cite names; there are too many of them. One example among many others of this German domination: the German-English hybrid word "eigenvalue". The word "kernel" could just as well have remained the German word "Kern", which appears around 1900.

The triumph of Linear Algebra is rather recent (mid-20th century), "triumph" meaning that Linear Algebra has now found a very central place. And determinants in all that? Maybe the biggest blade of this Swiss Army knife, but no more; another invariant (a term that would deserve a long paragraph of its own), the trace, would be another blade, and not the smallest.

In the 19th century, geometry was still at the heart of mathematical education; therefore, the connection between geometry and determinants was essential in the development of linear algebra. Some cornerstones:

  • the development of projective geometry, in its analytical form, in the 1850s. This development led in particular to placing homographies at the heart of projective geometry, with their associated matrix expression. Moreover, conic curves, described by a quadratic form, can just as well be written in an all-matrix form $X^TMX=0$, where $M$ is a symmetric $3 \times 3$ matrix. This convergence toward a unique and new "algebra" took time to be recognized.

A side remark: this kind of reflection was of capital importance in the decision of the Bourbaki group to avoid all figures and to adopt the extreme view of reducing geometry to linear algebra (see J. Dieudonné's "Down with Euclid" of the 1960s).

Different examples of the emergence of new trends:

a) the concept of rank: for example, a pair of straight lines is a degenerate conic whose matrix has rank at most 2 (rank 1 when the two lines coincide); a small sketch is given below. The "rank" of a matrix used to be defined in an indirect way as the order of the largest nonzero determinant that can be extracted from the matrix. Nowadays, the rank is defined in a more straightforward way as the dimension of the range space... at the cost of a little more abstraction.
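
Here is a small sketch (my own illustration, not part of the original answer) showing both definitions of rank agreeing on a degenerate conic:

```python
# A small sketch (not from the original answer): the degenerate conic x*y = 0,
# a pair of distinct lines, has a symmetric matrix of rank 2; the full 3x3
# "extracted determinant" vanishes, but a 2x2 one does not.
import sympy as sp

half = sp.Rational(1, 2)
M = sp.Matrix([[0, half, 0],
               [half, 0, 0],
               [0, 0, 0]])       # X^T M X = x*y  in homogeneous coordinates

print(M.rank())          # 2  -- modern definition: dimension of the column space
print(M.det())           # 0  -- no nonzero 3x3 minor
print(M[:2, :2].det())   # -1/4 -- a nonzero 2x2 minor: the old definition gives 2 too
```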

b) the concept of linear transformations and of duality arising from geometry: $X=(x,y,t)^T\mapsto U=MX=(u,v,w)^T$ between points $(x,y)$ and straight lines with equations $ux+vy+w=0$. More precisely, the tangential description, i.e., the constraint on the coefficients $U^T=(u,v,w)$ of the tangent lines to the conic, has been recognized as associated with $M^{-1}$ (assuming $\det(M) \neq 0$!), due to the relationship

$$X^TMX=X^TMM^{-1}MX=(MX)^T(M^{-1})(MX)=U^TM^{-1}U=\begin{pmatrix}u&v&w\end{pmatrix}\begin{pmatrix}A & B & D \\ B & C & E \\ D & E & F \end{pmatrix}\begin{pmatrix}u \\ v \\ w \end{pmatrix}=0,$$

where the capital letters denote the entries of $M^{-1}$,

whereas, in the 19th century, it was usual to write the previous quadratic form as

$$\det \begin{pmatrix}M&U\\U^T&0\end{pmatrix}=\begin{vmatrix}a&b&d&u\\b&c&e&v\\d&e&f&w\\u&v&w&0\end{vmatrix}=0,$$

i.e., as the determinant of the matrix obtained by "bordering" $M$ (whose entries are the lowercase $a,\dots,f$) precisely by $U$; the two expressions differ only by the nonzero factor $-\det(M)$

(see the excellent lecture notes at http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html). It should be said that the idea of linear transformations, especially orthogonal ones, arose even earlier in the framework of number theory (quadratic representations).
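
For the record, this bordering identity is easy to check symbolically. Here is a minimal sketch (my own verification, not part of the original answer), using sympy:

```python
# A small sketch (my own check, not from the original answer): verify that
#   det([[M, U], [U^T, 0]]) = -det(M) * U^T M^{-1} U
# for a generic symmetric 3x3 matrix M, so the bordered determinant and the
# tangential equation U^T M^{-1} U = 0 vanish together whenever det(M) != 0.
import sympy as sp

a, b, c, d, e, f, u, v, w = sp.symbols('a b c d e f u v w')
M = sp.Matrix([[a, b, d],
               [b, c, e],
               [d, e, f]])
U = sp.Matrix([u, v, w])

bordered = M.row_join(U).col_join(U.T.row_join(sp.Matrix([[0]])))
lhs = bordered.det()
rhs = -M.det() * (U.T * M.inv() * U)[0, 0]

print(sp.simplify(lhs - rhs))   # 0
```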

Remark: the way the preceding identities have been written uses matrix-algebra notation and rules that were unknown in the 19th century, with the notable exception of Grassmann's "Ausdehnungslehre", whose ideas were too far ahead of their time (1844) to have a real influence.

c) the concept of eigenvector/eigenvalue, initially motivated by the determination of "principal axes" of conics and quadrics.

  • the very idea of a "geometric transformation" (more or less born with Klein circa 1870) associated with an array of numbers (when linear or projective). A matrix, of course, is much more than an array of numbers... But think, for example, of the persistence of the expression "table of direction cosines" (instead of "orthogonal matrix"), as can still be found in the 2002 edition of Analytical Mechanics by A.I. Lurie.

d) The concept of "companion matrix" of a polynomial $P$, that could be considered as a tool but is more fundamental than that (https://en.wikipedia.org/wiki/Companion_matrix). It can be presented and "justified" as a "nice determinant" : In fact, it has much more to say, with the natural interpretation for example in the framework of $\mathbb{F}_p[X]$ (polynomials with coefficients in a finite field) as the matrix of multiplication by $P(X)$. (https://glassnotes.github.io/OliviaDiMatteo_FiniteFieldsPrimer.pdf), giving rise to matrix representations of such fields. Another remarkable application of companion matrices : the main numerical method for obtaining the roots of a polynomial is by computing the eigenvalues of its companion matrix using a Francis "QR" iteration (see (https://math.stackexchange.com/q/68433)).

References:

I recently discovered a rather similar question with a very complete answer by Denis Serre, a specialist in the domain of matrices: https://mathoverflow.net/q/35988/88984

The article by Thomas Hawkins: "Cauchy and the spectral theory of matrices", Historia Mathematica 2, 1975, pp. 1-29.

See also http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf

An important bibliography is to be found at http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html.

See also a good paper by Nicholas Higham: http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf

For conic sections and projective geometry, see a) this excellent chapter of lecture notes, hosted at the Technical University of Munich (see the other chapters as well): https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf, and b) http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html.

Don't miss the following very interesting paper about various kinds of useful determinants: https://arxiv.org/pdf/math/9902004.pdf



A fundamental work on "The Theory of Determinants" in 4 volumes was written by Thomas Muir: http://igm.univ-mlv.fr/~al/Classiques/Muir/History_5/VOLUME5_TEXT.PDF (years 1906, 1911, 1922, 1923) for the later volumes, or, for all of them, https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf. It is very interesting to open random pages and see how important the determinant mania was, especially in the second half of the 19th century. Matrices appear in some places, with the double-bar convention that lasted a very long time. Matrices are mentioned here and there, rarely to their advantage...


Solution 2:

It depends who you speak to.

  • In numerical mathematics, where people actually have to compute things on a computer, it is largely recognized that determinants are useless. Indeed, in order to compute determinants, either you use the Laplace recursive rule ("violence on minors"), which costs $O(n!)$ and is infeasible already for quite small values of $n$, or you go through a triangular decomposition (Gaussian elimination), which by itself already tells you everything you needed to know in the first place. Moreover, for most reasonably sized matrices containing floating-point numbers, determinants overflow or underflow (try $\det\left(\frac{1}{10} I_{350\times 350}\right)$, for instance; see the sketch after this list). To put another nail in the coffin, computing eigenvalues by finding the roots of $\det(A-xI)$ is hopelessly unstable. In short: in numerical computing, whatever you want to do with determinants, there is a better way to do it without using them.
  • In pure mathematics, where people are perfectly fine knowing that an explicit formula exists, all the examples are $3\times 3$ anyway, and computations are made by hand, determinants are invaluable. If one uses Gaussian elimination instead, all those divisions complicate computations horribly: one needs to take different paths depending on whether things are zero or not, so when computing symbolically one gets lost in a myriad of cases. The great thing about determinants is that they give you an explicit polynomial formula that tells when a matrix is invertible or not: this is extremely useful in proofs, and it allows for lots of elegant arguments. For instance, try proving this fact without determinants: given $A,B\in\mathbb{R}^{n\times n}$, if $A+Bx$ is singular for $n+1$ distinct real values of $x$, then it is singular for all values of $x$ (the determinant argument is sketched after this list). This is the kind of thing you need in proofs, and determinants are a priceless tool for it. Who cares if the explicit formula has an exponential number of terms: it has a very nice structure, with lots of neat combinatorial interpretations.
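
To make the underflow remark of the first bullet concrete, here is a minimal sketch (mine, not part of the answer), using numpy:

```python
# A minimal sketch (not part of the answer): det((1/10) * I_350) = 10^(-350),
# far below the smallest positive double (~1e-308), so the determinant
# underflows to 0 even though the matrix is perfectly well conditioned.
import numpy as np

A = np.eye(350) / 10.0

print(np.linalg.det(A))       # 0.0 -- underflow
print(np.linalg.slogdet(A))   # sign 1.0, log|det| = -350*ln(10) ~ -805.9: no underflow
print(np.linalg.cond(A))      # 1.0 -- the matrix itself is harmless
```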
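
And for the $A+Bx$ exercise in the second bullet, the determinant argument is essentially one line (a sketch of, presumably, the intended proof):

$$p(x):=\det(A+Bx)\ \text{is a polynomial in } x \text{ of degree at most } n;\quad p(x_0)=\dots=p(x_n)=0 \text{ at } n+1 \text{ distinct points} \implies p\equiv 0,$$

i.e., $A+Bx$ is singular for every $x$.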

Solution 3:

Determinants are still very much relevant to abstract algebra. In applied mathematics, they are less so, though the claim that "determinants are impractical because they take too long to compute" is misguided (no one forces you to compute them by the Leibniz formula; Gaussian elimination works in $O\left(n^3\right)$ time, and there is an $O\left(n^4\right)$ division-free algorithm as well). (Another oft-repeated assertion is that Cramer's rule is not very useful for solving actual systems of linear equations; I believe this one is correct, but I mostly see Cramer's rule used as a theoretical tool in proofs rather than in computational applications.)
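
For concreteness, here is a minimal sketch (mine, not the answer author's) of the $O(n^3)$ route: Gaussian elimination with partial pivoting yields the determinant as a signed product of pivots.

```python
# Sketch: determinant via Gaussian elimination with partial pivoting,
# O(n^3) operations instead of the O(n!) Leibniz/Laplace expansions.
import numpy as np

def det_by_elimination(A):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting
        if A[p, k] == 0.0:
            return 0.0                        # singular matrix
        if p != k:
            A[[k, p]] = A[[p, k]]             # row swap flips the sign
            sign = -sign
        A[k+1:, k:] -= np.outer(A[k+1:, k] / A[k, k], A[k, k:])
    return sign * np.prod(np.diag(A))

A = np.random.rand(5, 5)
print(det_by_elimination(A), np.linalg.det(A))   # the two values agree
```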

What is going on is the following: up to the early 20th century, determinants used to be one of the few tools available for linear-algebraic problems. (They might even be one of the oldest such tools, having been discovered by Takakazu Seki back in 1683.) Other linear-algebraic tools started appearing in the 18th and 19th centuries, but their development always lagged behind that of determinants until the likes of Noether and Bourbaki came around in the 20th century. (Muir's 5-volume annals of determinant theory, which can be downloaded from Alain Lascoux's website, contain an impressive collection of results, many of them deep, about determinants.) Thus, for a long time, the only way to a deep result would pass through the land of determinants, simply because the other lands were barely explored. Only after the notions of vector spaces, modules, tensors, exterior powers, etc. went mainstream (in the 1950s?) could mathematicians afford to avoid determinants, and they started noticing that it was often easier to do so; with the benefit of hindsight, some of the older uses of determinants were just detours.

At some point, avoiding determinants became something of a cultural fashion, and Axler took it to an extreme in his LADR textbook, emphasizing non-constructive methods and leaving students rather ill-prepared for research in abstract algebra. Nevertheless, Axler's approach has some strengths (e.g., his nice and slick determinant-free proof of the existence of eigenvectors, sketched below, has become standard now and is included even in Treil's LADW book, whose title is a quip on Axler's), which once again illustrates what I think is the correct takeaway from the whole story: determinants used to be treated as a panacea, for lack of other tools of comparable strength; but now that the rest of linear algebra has caught up, they have retreated to the grounds where they belong, which is still a wide swath of the mathematical landscape (many parts of abstract algebra and algebraic combinatorics have determinants written into their DNA rather than merely using them as a tool, and they are not very likely to shed them).
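
For reference, the determinant-free existence argument alluded to above runs roughly as follows (a sketch, for an operator $T$ on a nonzero $n$-dimensional complex vector space and any fixed $v\neq 0$):

$$v,\ Tv,\ \dots,\ T^n v \ \text{are } n+1 \text{ vectors, hence linearly dependent:}\quad a_0v+a_1Tv+\dots+a_nT^nv=0 \text{ with not all } a_i=0.$$

Since $v\neq 0$, the polynomial $a_0+a_1z+\dots+a_nz^n$ is nonconstant, so it factors over $\mathbb{C}$ as $c\,(z-\lambda_1)\cdots(z-\lambda_m)$ with $c\neq 0$ and $m\geq 1$, and therefore

$$c\,(T-\lambda_1 I)\cdots(T-\lambda_m I)\,v=0,$$

so at least one factor $T-\lambda_j I$ is not injective, i.e., $\lambda_j$ is an eigenvalue of $T$, with no determinant in sight.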