Why is determinant called determinant?

Can someone explain to me why we call the determinant of a matrix the "determinant"? Does the name have any meaning? Does it determine something, for example?


Here is some information about the origin of the term determinant. This term was introduced for the first time in $1801$ by C.F. Gauss in his Disquisitiones arithmeticae, XV, p. 2, in connection with a form of second degree.

  • The following is from The Theory of Determinants in the historical order of development (1905) by Thomas Muir.

    [Muir, p. 64]: Gauss writes the form as \begin{align*} axx+2bxy+cyy \end{align*} and for shortness speaks of it as the form $(a,b,c)$.

    The function of the coefficients $a,b,c$, which was found by Lagrange to be of notable importance in the discussion of the form, Gauss calls the determinant of the form, the exact words being

  • [Gauss, 1801] Numerum $bb-ac$, a cuius indole proprietates formae $(a,b,c)$ imprimis pendere in sequentibus docebimus, determinantem huius formae uocabimus. (In English: the number $bb-ac$, on whose nature we shall show in what follows that the properties of the form $(a,b,c)$ chiefly depend, we will call the determinant of this form.)

and Muir continues:

  • [Muir, p. 64] ... Here then we have the first use of the term which with an extended signification has in our day come to be so familiar. It must be carefully noted that the more general functions, to which the name came afterwards to be given, also repeatedly occur in the course of Gauss' work, ...
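As a side remark, Gauss's number is easy to relate to the modern notion: writing the binary form with the symmetric matrix $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$, we have

\begin{equation*} axx + 2bxy + cyy = \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} a & b \\ b & c \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}, \qquad bb - ac = -\det\begin{pmatrix} a & b \\ b & c \end{pmatrix}, \end{equation*}

so the quantity Gauss names the determinant of the form is, up to sign, the determinant (in today's sense) of the matrix of the form.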

Besides the historical reasons, which are covered in the previous answers, here's another take on why we would say the determinant "determines" something. This is almost certainly not the origin of the word, but it gives you another curious answer to the question.

I believe that an interesting way of looking at it is to begin with alternating multilinear forms. These are mappings $t: V \times \overset{n}{\dots}\times V \to \mathbb{R}$ (where $V$ is a vector space and $\mathbb{R}$ can be replaced by any other scalar field) which are linear in every entry and evaluate to $0$ whenever two or more entries are equal; that last condition is what 'alternating' means. For example, if $V = \mathbb{R}$, then $t(1, 2, \dots, n-1, 2) = 0$ because the entry $2$ appears twice. There are plenty of places where you can read about multilinear mappings (see Wikipedia, for example); read that first and then come back. Here I will focus on the fact that interests us.
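For a concrete picture, take $n = 2$ and $V = \mathbb{R}^2$, and write $x_1, x_2$ for the components of $\vec{x}$. The map

\begin{equation*} t(\vec{x}, \vec{y}) = x_1 y_2 - x_2 y_1 \end{equation*}

is linear in each argument and satisfies $t(\vec{x}, \vec{x}) = x_1 x_2 - x_2 x_1 = 0$, so it is an alternating bilinear form; it is in fact the $2 \times 2$ determinant we will recover below.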

Suppose that you have a basis of $V$, say $\lbrace \vec{u}_1, \dots, \vec{u}_n \rbrace$. Now take any family of $n$ vectors $\lbrace \vec{x}_1, \dots, \vec{x}_n \rbrace$ you like in the input space, and note that those vectors can be expressed in terms of the basis we chose:

\begin{equation} (\vec{x}_1, \dots, \vec{x}_n) = (\vec{u}_1, \dots, \vec{u}_n) \begin{pmatrix} a_1^1 & a_2^1 & \dots & a_n^1 \\ a_1^2 & a_2^2 & \dots & a_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ a_1^n & a_2^n & \dots & a_n^n \end{pmatrix} \end{equation}

Now we are ready to expand $t(\vec{x}_1, \dots, \vec{x}_n)$ using the multilinearity of $t$:

\begin{align*} t(\vec{x}_1, \dots, \vec{x}_n) &= t\left(\sum_{i_1=1}^n a^{i_1}_1 \vec{u}_{i_1}, \dots, \sum_{i_n=1}^n a^{i_n}_n \vec{u}_{i_n}\right) \\ &= \sum_{i_1=1}^n a^{i_1}_1 \dots \sum_{i_n=1}^n a^{i_n}_n\cdot t(\vec{u}_{i_1}, \dots,\vec{u}_{i_n}) = \sum_{i_1,\dots, i_n=1}^n a^{i_1}_1 \dots a^{i_n}_n \cdot t(\vec{u}_{i_1}, \dots,\vec{u}_{i_n}) \end{align*}

In this sum we do not put any restrictions on the indices, so there will be terms with a repeated index $i_j$; each such term equals $0$ because $t$ is alternating, e.g. $t(\vec{u}_{i_1}, \dots,\vec{u}_{i_j}, \dots, \vec{u}_{i_j}, \dots,\vec{u}_{i_n}) = 0$. Therefore, the only terms in this sum which are not zero are those for which $i_1, \dots, i_n$ are all distinct; in other words, those for which $(i_1, \dots, i_n)$ is a permutation $\sigma$ of $(1,\dots,n)$. Moreover, swapping two arguments of $t$ changes its sign (this also follows from $t$ being alternating), so reordering $\vec{u}_{\sigma(1)}, \dots, \vec{u}_{\sigma(n)}$ back into $\vec{u}_1, \dots, \vec{u}_n$ only costs a factor $\operatorname{sgn}(\sigma)$:

\begin{align*} t(\vec{x}_1, \dots, \vec{x}_n) &= \sum_{\sigma \in \mathcal{S}_n} a^{\sigma(1)}_1 \dots a^{\sigma(n)}_n \cdot t(\vec{u}_{\sigma(1)}, \dots,\vec{u}_{\sigma(n)}) \\ &= \sum_{\sigma \in \mathcal{S}_n} a^{\sigma(1)}_1 \dots a^{\sigma(n)}_n \cdot \operatorname{sgn}(\sigma) \cdot t(\vec{u}_1, \dots,\vec{u}_n) \end{align*}
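To see the mechanics in the smallest nontrivial case, take $n = 2$ and write $\vec{x}_1 = a^1_1 \vec{u}_1 + a^2_1 \vec{u}_2$, $\vec{x}_2 = a^1_2 \vec{u}_1 + a^2_2 \vec{u}_2$. Expanding by bilinearity and dropping the terms with a repeated basis vector leaves

\begin{align*} t(\vec{x}_1, \vec{x}_2) &= a^1_1 a^2_2 \cdot t(\vec{u}_1, \vec{u}_2) + a^2_1 a^1_2 \cdot t(\vec{u}_2, \vec{u}_1) \\ &= \left( a^1_1 a^2_2 - a^2_1 a^1_2 \right) \cdot t(\vec{u}_1, \vec{u}_2), \end{align*}

and the coefficient in front of $t(\vec{u}_1, \vec{u}_2)$ is the familiar $2 \times 2$ determinant of the coordinate matrix.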

Finally, we see that the way $t$ acts upon an arbitrary family of vectors has a part which is common to every family of vectors ($t$ acting on the basis) and a differentiating part which is particular to that family (because the coordinates with respect to the chosen basis identify the vectors uniquely). We call that differentiating part the determinant of that family of vectors, because it is indeed what determines the value of $t(\vec{x}_1, \dots, \vec{x}_n)$:

\begin{equation}\operatorname{det}_{\lbrace \vec{u}_i \rbrace}(\vec{x}_1, \dots, \vec{x}_n) = \sum_{\sigma \in \mathcal{S}_n} a^{\sigma(1)}_1 \dots a^{\sigma(n)}_n \cdot \operatorname{sgn}(\sigma) \end{equation}

You can see that the definition of the determinant of a matrix is exactly the same: it is the determinant of the family of vectors that form the columns of the matrix. The final part of this reasoning would be to show that, for a given basis, there is a unique alternating multilinear form $d$ satisfying $d(\vec{u}_1, \dots, \vec{u}_n) = 1$, and thus $d(\vec{x}_1, \dots, \vec{x}_n) = \operatorname{det}_{\lbrace \vec{u}_i \rbrace}(\vec{x}_1, \dots, \vec{x}_n)$; so the determinant is itself an alternating multilinear form, and we get all the nice properties of multilinear forms for our concept of determinant.
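If you want to see the permutation formula in action numerically, here is a minimal sketch in Python (my own illustration; it assumes NumPy is available, and `det_leibniz` and `sign` are just ad-hoc helper names, not a library API). It sums over all permutations exactly as in the displayed formula and compares the result with `numpy.linalg.det`:

```python
from itertools import permutations

import numpy as np


def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices,
    computed by counting inversions."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1


def det_leibniz(A):
    """Determinant via the permutation sum
    over sigma of sgn(sigma) * a^{sigma(1)}_1 * ... * a^{sigma(n)}_n."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        prod = 1.0
        for j in range(n):
            prod *= A[sigma[j], j]  # entry a^{sigma(j)}_j: row sigma(j), column j
        total += sign(sigma) * prod
    return total


A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

print(det_leibniz(A))    # 8.0
print(np.linalg.det(A))  # same value, up to floating-point rounding
```

Of course, this sum has $n!$ terms, so it is only meant to illustrate the formula; practical routines such as `numpy.linalg.det` obtain the same number from an LU factorization.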