Does the unique map on the zero space have determinant 1?

@AlexRavsky argues in their answer that the determinant of the unique linear operator on the zero vector space is undefined, because the zero vector space has no basis.

I consider the convention that an empty sum equals $0$ (and that an empty product equals $1$) to be very natural. So, for me, it is a consequence of the definition of a basis that the zero vector space over any field has a (unique) basis, namely the empty set.

So, for me, it makes sense to talk about the determinant of the unique linear operator on the zero vector space, and I argue below that its determinant exists and equals $1$, regardless of which definition of determinant we choose.


Definition 1. The determinant of $T \colon V \to V$ is the determinant of the $n \times n$ matrix of $T$ with respect to any ordered basis of $V$, where $n = \dim V$.

Let $\mathbb{F}$ be a field and $V$ be the zero vector space over $\mathbb{F}$, so that $V$ has dimension $0$ over $\mathbb{F}$ and the empty set is the unique basis of $V$ over $\mathbb{F}$.

Question: Is the empty set an ordered basis for $V$?
Answer: Yes; the unique (empty) relation on $\emptyset$ is vacuously a total ordering.

Question: What is the matrix of $T\colon V \to V$ with respect to the ordered basis $\emptyset$?
Answer: It is the empty matrix, the unique element of the space $\mathbb{F}^{0 \times 0}$.

Explanation: Note that $\mathbb{F}^{m \times n}$ denotes the set of all $m \times n$ matrices, and is defined as the set of all functions $\{ (i,j) : 1 \leq i \leq m, 1 \leq j \leq n\} \to \mathbb{F}$. When $m = 0 = n$, the domain is $\emptyset$, and there is a unique function $\emptyset \to \mathbb{F}$, namely the empty function. This is what we call the empty matrix.

So, the empty matrix is the unique candidate for being the matrix of $T$ with respect to the ordered basis $\emptyset$, but is it actually the matrix of $T$?

For the matrix $A \in \mathbb{F}^{n \times n}$ to be the matrix of the linear transformation $T \colon V \to V$ with respect to the ordered basis $\mathcal{B} = (v_1,\dotsc,v_n)$, we should have that, for every $1 \leq i \leq n$, its $i$th column is the coordinate vector of $T(v_i)$ with respect to $\mathcal{B}$. When $n = 0$, this condition is vacuously satisfied, so the empty matrix is the matrix of $T$ with respect to the ordered basis $\emptyset$.

Question: Does $\mathbb{F}^{0 \times 0}$ contain the identity matrix?
Answer: Yes, the empty matrix is the identity matrix in $\mathbb{F}^{0 \times 0}$.

Explanation: The identity matrix $I_n \in \mathbb{F}^{n \times n}$ is defined by: for every $1 \leq i, j \leq n$, $(I_n)_{i,j} = 1$ if $i = j$ and $0$ if $i \neq j$. These conditions are vacuously satisfied when $n = 0$. So, every element of $\mathbb{F}^{0 \times 0}$ is the identity matrix, or said less dramatically, the unique element of $\mathbb{F}^{0 \times 0}$ is the identity matrix.

Note that the unique element of $\mathbb{F}^{0 \times 0}$ is also the zero matrix, for the same reason. In fact, it is the scalar matrix $\lambda I_0$ for every choice of scalar $\lambda \in \mathbb{F}$, again for the same reason.
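If it helps to see these vacuous conditions concretely, here is a small Python sketch (my own illustrative modeling, not any standard library) that represents an $m \times n$ matrix literally as in the explanation above: a function, here a dict, from index pairs to entries.

```python
# A matrix in F^{m x n} modeled as a function
# {(i, j) : 1 <= i <= m, 1 <= j <= n} -> F, represented as a Python dict.

def identity_matrix(n):
    """The function sending (i, j) to 1 if i == j and to 0 otherwise.
    For n == 0 the domain is empty, so this is the empty dict."""
    return {(i, j): 1 if i == j else 0
            for i in range(1, n + 1) for j in range(1, n + 1)}

def is_identity(M, n):
    """Check the defining condition of I_n; vacuously True when n == 0."""
    return all(M[i, j] == (1 if i == j else 0)
               for i in range(1, n + 1) for j in range(1, n + 1))

def is_zero(M, n):
    """Check the defining condition of the zero matrix."""
    return all(M[i, j] == 0
               for i in range(1, n + 1) for j in range(1, n + 1))

empty_matrix = identity_matrix(0)    # the unique element of F^{0 x 0}
print(empty_matrix)                  # {}
print(is_identity(empty_matrix, 0))  # True (vacuously)
print(is_zero(empty_matrix, 0))      # True (vacuously)
```

Both checks succeed on the same object, mirroring the remark that the empty matrix is simultaneously the identity matrix and the zero matrix.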

Question: Is there a determinant function on $\mathbb{F}^{0 \times 0}$?
Answer: Yes, the function $\mathbb{F}^{0 \times 0} \to \mathbb{F}$ that maps the empty matrix to $1$ is a determinant function.

Explanation: For a function $f \colon \mathbb{F}^{n \times n} \to \mathbb{F}$ to be a determinant function, it should be a multilinear, alternating function of the rows (or columns, it makes no difference), such that the identity matrix $I_n$ is mapped to $1$.

We have shown that the empty matrix is also the identity matrix. There is a unique function $\mathbb{F}^{0 \times 0} \to \mathbb{F}$ that maps the empty matrix to $1$, since $\mathbb{F}^{0 \times 0}$ is a singleton set. We will show that this function is also a multilinear and alternating function on the rows, so it will turn out to be the unique determinant function on $\mathbb{F}^{0 \times 0}$.

A function $f \colon \mathbb{F}^{n \times n} \to \mathbb{F}$ is a multilinear function of the rows if, for every $1 \leq i \leq n$ and every choice of vectors $v_1,\dotsc,v_{i-1},v_{i+1},\dotsc,v_n \in \mathbb{F}^{n}$, the function $g \colon \mathbb{F}^n \to \mathbb{F}$ defined by $g(v) = f(v_1,\dotsc,v_{i-1},v,v_{i+1},\dotsc,v_n)$ for all $v \in \mathbb{F}^n$ is linear. When $n = 0$, this condition is vacuously true. So, every function $\mathbb{F}^{0 \times 0} \to \mathbb{F}$ is a multilinear function of the rows.

A function $f \colon \mathbb{F}^{n \times n} \to \mathbb{F}$ is an alternating function of the rows if, for every $1 \leq i < j \leq n$, whenever $v_1,\dotsc,v_n$ are vectors in $\mathbb{F}^n$ with $v_i = v_j$, we have $f(v_1,\dotsc,v_n) = 0$. Again, when $n = 0$, this condition is vacuously true. So, every function $\mathbb{F}^{0 \times 0} \to \mathbb{F}$ is an alternating function of the rows.

Hence, the function that maps the empty matrix to $1$ is a determinant function.

Hence, $\det(T)$ exists and equals $1$.


Next, let us see what happens with the basis-free definition of the determinant of a linear operator.

Definition 2. Let $n = \dim V$. The determinant of $T\colon V \to V$ is the unique scalar $c \in \mathbb{F}$ such that the induced linear operator $\Lambda^n(T) \colon \Lambda^n(V) \to \Lambda^n(V)$ satisfies $\Lambda^n(T)(v_1 \wedge \dotsb \wedge v_n) = c \cdot (v_1 \wedge \dotsb \wedge v_n)$ for all $v_1, \dotsc, v_n \in V$.

Again, let $\mathbb{F}$ be a field and $V$ be the zero vector space over $\mathbb{F}$, so that $V$ has dimension $0$ over $\mathbb{F}$ and the empty set is the unique basis of $V$ over $\mathbb{F}$.

Question: What is the $0$th exterior power of $V$, $\Lambda^0(V)$?
Answer: $\Lambda^0(V) = \mathbb{F}$.

Explanation: The $k$th exterior power of $V$, denoted $\Lambda^k(V)$, is the vector subspace of the exterior algebra $\Lambda(V)$ generated by all vectors of the form $v_1 \wedge \dotsb \wedge v_k$, where $v_1,\dotsc,v_k \in V$.

Hence, $\Lambda^0(V)$ is the vector subspace spanned by the empty wedge product. By convention, the empty wedge product is equal to $1$ in $\Lambda(V)$, which is just the scalar $1 \in \mathbb{F}$ by the canonical embedding of $\mathbb{F}$ into $\Lambda(V)$. The vector subspace of $\Lambda(V)$ generated by $1$ is just $\mathbb{F}$. Hence, $\Lambda^0(V) = \mathbb{F}$. Note that this is actually true regardless of whether $V$ is the zero vector space or not.

Question: What linear operator does $T$ induce on $\Lambda^0(V)$?
Answer: If $f \colon V \to V$ is a linear operator on the $n$-dimensional vector space $V$ over $\mathbb{F}$, then $f$ induces the operator $\Lambda^n(f) \colon \Lambda^n(V) \to \Lambda^n(V)$ defined by $\Lambda^n(f)(v_1 \wedge \dotsb \wedge v_n) = f(v_1) \wedge \dotsb \wedge f(v_n)$, for all $v_1,\dotsc,v_n \in V$.

When $n = 0$, the induced operator $\Lambda^0(T)$ is defined by the single relation $\Lambda^0(T)(1) = 1$, since every $n$-fold wedge product appearing in the definition of $\Lambda^n(T)$ becomes the empty wedge product equal to $1$.

Question: Does the determinant of $T \colon V \to V$ exist?
Answer: Yes!

Explanation: When $n = 0$, we have $\Lambda^0(T)(1) = 1 = 1 \cdot 1$. In case that's completely opaque, try the following: $\Lambda^0(T)(\wedge_{i=1}^0 v_i) = 1 \cdot \wedge_{i=1}^0 v_i$ for all $v_i \in V$ with $1 \leq i \leq 0$.

Hence, $\det(T)$ exists and equals $1$.


One can also see what happens when trying to extend the formula for the determinant of an $n \times n$ matrix over $\mathbb{F}$ to the case $n = 0$.

Definition 3. The determinant of the $n \times n$ matrix $A = (a_{ij})$ over $\mathbb{F}$ is given by $$\det(A) = \sum_{\sigma \in \mathfrak{S}_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i\sigma(i)}.$$

Question: What is $\mathfrak{S}_0$?
Answer: $\mathfrak{S}_0$ is the trivial group.

Explanation: When $n = 0$, $\mathfrak{S}_0$ is the set of bijections $\emptyset \to \emptyset$. Since there is only one such map, namely the empty map, there is only one permutation in $\mathfrak{S}_0$.

Question: What is the sign of the empty permutation?
Answer: $1$.

Explanation: There are several ways to define the sign of a permutation, but in any definition it must be a group homomorphism from $\mathfrak{S}_n$ to $\{ \pm 1 \} = \mathbb{Z}^\times$. Since $\mathfrak{S}_0$ is the trivial group, the empty permutation is its identity element, and any group homomorphism maps the identity to $1$.

Question: What is the determinant of the empty matrix?
Answer: $1$.

Explanation: Let $n = 0$ and $A$ be the empty matrix. Then, in the definition of $\det(A)$ above, the $n$-fold product inside the summation is the empty product, which equals $1$ by convention. So, we have $\det(A) = \sum_{\sigma \in \mathfrak{S}_0} \operatorname{sgn}(\sigma) \cdot 1 = 1 \cdot 1 = 1$.

Hence, the determinant of the unique $0 \times 0$ matrix exists and equals $1$.

This part was also pointed out in the comments by @Crostul.
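For the computationally inclined, here is a minimal Python sketch of Definition 3 (the function names are my own). Note that `itertools.permutations(range(0))` yields exactly one permutation, the empty tuple, whose sign is $1$ because it has zero inversions, and the product over an empty index range is $1$; so the code returns $1$ on the empty matrix without any special-casing.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation (a tuple of 0-based indices), via inversion
    count; the empty permutation has zero inversions, hence sign 1."""
    inversions = sum(1 for a in range(len(perm))
                       for b in range(a + 1, len(perm))
                       if perm[a] > perm[b])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    """Determinant of a square matrix (a list of rows) via the Leibniz
    formula of Definition 3."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        prod = 1                        # empty product == 1 when n == 0
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sign(perm) * prod
    return total

print(leibniz_det([]))                  # 1: the empty matrix
print(leibniz_det([[2, 0], [0, 3]]))    # 6
```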


There is yet another formula that defines the determinant of an $n \times n$ matrix over $\mathbb{F}$ recursively. We can try to extend this to $n = 0$, too.

Definition 4. Define $\det((a)) = a$ for every matrix $(a) \in \mathbb{F}^{1 \times 1}$. Let $n > 1$, and $A \in \mathbb{F}^{n \times n}$. Denote by $A[i|j]$ the $(n-1) \times (n-1)$ matrix obtained by deleting the $i$th row and $j$th column of $A$. Then, for any fixed $1 \leq j \leq n$, $$\det(A) = \sum_{i=1}^n (-1)^{i+j} A_{ij} \det(A[i|j]).$$

How would we extend the inductive step to $n \geq 1$? We start by noting that if $n = 1$, and $A \in \mathbb{F}^{1 \times 1}$, then there is only one possible value of $i$ and $j$, namely $1$. Moreover, $A[1|1]$ is the empty matrix for every $A \in \mathbb{F}^{1 \times 1}$.

Let $A = (a) \in \mathbb{F}^{1 \times 1}$ and let $E \in \mathbb{F}^{0 \times 0}$ be the empty matrix. If the formula were to be true when $n = 1$ as well, then it would read $$\det(A) = \sum_{i=1}^1 (-1)^{1+1} A_{11} \det(A[1|1]), \quad \text{i.e.} \quad a = a \det(E).$$ Since we want this to be true for all $a \in \mathbb{F}$, and $\mathbb{F}$ has at least one nonzero element, namely $1$, we see that we are forced to define $\det(E) = 1$.

Thus, the only extension of Definition 4 that works for all $n \geq 1$ is with the initialization $\det(E) = 1$, where $E$ is the empty matrix.
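Again, a minimal Python sketch, assuming the expansion of Definition 4 along the first column ($j = 1$) and the initialization $\det(E) = 1$ that we just derived; the same recursion then handles every $n \geq 0$ uniformly.

```python
def cofactor_det(A):
    """Determinant via Laplace expansion along the first column,
    with the n == 0 base case initialized to 1 as argued above."""
    n = len(A)
    if n == 0:
        return 1                        # det of the empty matrix
    return sum((-1) ** i * A[i][0] *
               cofactor_det([row[1:] for k, row in enumerate(A) if k != i])
               for i in range(n))

print(cofactor_det([]))                 # 1
print(cofactor_det([[5]]))              # 5: matches det((a)) = a
print(cofactor_det([[1, 2], [3, 4]]))   # -2
```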


To the best of my knowledge, standard linear algebra textbooks do not concern themselves with the case $n = 0$ when discussing determinants.

I checked Linear Algebra by Hoffman and Kunze, and they start by defining determinants of $n \times n$ matrices where $n$ is consistently assumed to be a positive integer (i.e. $> 0$). So, they don't deal with determinants of $0 \times 0$ matrices.

They do follow it up with a more general section describing determinants of linear operators on free modules of rank $n$ over commutative rings with identity. There they prove a result (on page 172) equivalent to our Definition 2. But they implicitly only consider the case $n > 0$ again, since they make no remarks at all about the case $n = 0$.

(It might be worth noting that Hoffman and Kunze do state that the zero vector space has dimension $0$ and has the empty set as a basis (on page 45), so they do not entirely avoid discussing the $n = 0$ case everywhere.)

I also checked Algebra by Serge Lang; he follows a similar line of exposition and likewise makes no remark about the case $n = 0$. But, throughout the book, Lang is not concerned with such matters, so this is not particularly surprising. The same is true of Basic Algebra, Volume I, by Nathan Jacobson.


I don't recall offhand any reference which discusses this issue but I'll add my two cents in support of taking $\det{f} = 1$ as the definition for the unique map $f \colon V \rightarrow V$ on a zero-dimensional space.

This definition is consistent with various standard theorems in linear algebra, so one doesn't need to exclude the zero-dimensional case as an exception. In fact, I can't think of a single theorem that becomes false if one takes $\det{f} = 1$ as the definition, while most break if you take $\det{f} = 0$ and don't exclude the zero-dimensional case. For example,

  1. An endomorphism $f$ of a finite-dimensional vector space is invertible iff $\det(f) \neq 0$.
  2. The characteristic polynomial of an operator $f \colon V \rightarrow V$ on a finite dimensional vector space is monic of degree $\dim V$ and a scalar $\lambda \in \mathbb{F}$ is a root of the characteristic polynomial iff $\lambda$ is an eigenvalue of $f$. The characteristic polynomial of $f$ is defined as $\chi_f(x) = \det(x \cdot \operatorname{id} - f)$ so using the "1" convention it becomes $\chi_f(x) = 1$ which is indeed monic of degree zero and doesn't have any roots. Using the $0$ convention gives $\chi_f(x) = 0$ which has degree $-\infty$ and all scalars as roots.
  3. The minimal polynomial of $f$ divides the characteristic polynomial. Recall that the minimal polynomial is the unique monic polynomial $m_f$ of least degree such that $m_f(f) = 0$. In the zero-dimensional case it is the constant polynomial $1$ (indeed $m_f(f) = \operatorname{id}_V = 0$, since the identity and the zero operator coincide on the zero space), and it divides the characteristic polynomial $1$. If the characteristic polynomial were $0$, the statement would hold only vacuously (every polynomial divides $0$) and the characteristic polynomial would no longer be monic.
  4. The characteristic polynomial of the restriction of an operator $g$ to a $g$-invariant subspace divides the characteristic polynomial of $g$. Since $\{ 0 \}$ is a legitimate $g$-invariant subspace, it makes sense to take the characteristic polynomial of $g|_{\{0\}} = f$ to be $1$ and not $0$.
  5. An orthogonal map on an inner product space has determinant $\pm 1$. The unique map on the zero space is (vacuously) orthogonal, and the value $1$ is consistent with this while $0$ is not.
  6. The determinant of the identity map is $1$.
  7. It might look silly, but the idea of orienting a zero-dimensional vector space is important (for example, to state a general version of Stokes' theorem which includes the fundamental theorem of calculus as a special case). An orientation on a zero-dimensional real vector space is just a choice of $\pm 1$ which states whether the point is "positive" or "negative". The unique map on the zero-dimensional vector space does nothing (it is the identity map), so it should be orientation-preserving, and a map is orientation-preserving iff $\det(f) > 0$.

Finally, let me say that, in my mind, if one wants to define the determinant of the unique map $f \colon V \rightarrow V$ on the zero-dimensional space, then the definition shouldn't depend on the choice of field $\mathbb{F}$ over which we are working (this is more of a meta-mathematical statement). It would be ridiculous to define, say, $\det(f) = 2$ if $\mathbb{F} = \mathbb{R}$ but $\det(f) = 3$ if $\mathbb{F} = \mathbb{Z}_5$. That leaves two sensible choices: $\det(f) = 0$ or $\det(f) = 1$. Since $\det(f) = 0$ breaks so many theorems, it doesn't make sense to take it as the definition; better to just leave it undefined than to define it as $0$.


Added: Almost any textbook that develops the determinant via the exterior algebra will, as a result, arrive at $\det(f) = 1$ in the zero-dimensional case; for such textbooks it is not a convention but a (trivial) result. For a specific example, see Algebra 1, Chapters 1-3 by Bourbaki: on page 525 they state that $\det([]) = 1$, but for them this is not a convention or an ad hoc definition but a result proved from their definition of the determinant and their notion of a matrix.


Here's another argument for $\det () = 1$ (where $()$ denotes the empty matrix):

If $I_n$ is the $n\times n$ identity matrix, and $\lambda\in K$, then we have $\det (\lambda I_n) = \lambda^n$. Now we have $I_0=()$ and also $\lambda I_0=()$. Therefore $$\det() = \det I_0 = \det (\lambda I_0) = \lambda^0 = 1$$

Note that the $n\times n$ zero matrix $0_n$ is also of the form $\lambda I_n$, with $\lambda=0$. For $n>0$, this gives as determinant of the zero matrix $\det 0_n=0^n=0$.

For $n=0$, however, you get $0^0$, which, depending on whom you ask, is either $1$ (thus confirming $\det() = \det 0_0 = 1$) or undefined (in which case the formula simply does not apply when both $n=0$ and $\lambda=0$, so there is still no contradiction).

Also, the empty matrix is idempotent. For any idempotent matrix $A$ we have $\det A = \det A^2 = (\det A)^2$, and the only idempotent elements of $K$ are $0$ and $1$. But since the empty matrix is invertible (it is the identity matrix, hence its own inverse), its determinant must be invertible as well. As $0$ is not invertible, the determinant must therefore be $1$.
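As a side note, numerical software tends to agree with this convention. For instance, NumPy (at least in the versions I have tried) happily takes the determinant of a $0 \times 0$ array:

```python
import numpy as np

E = np.zeros((0, 0))     # the empty matrix
print(np.linalg.det(E))  # 1.0
print((E @ E).shape)     # (0, 0): E @ E == E, so E is idempotent
print(0 ** 0)            # 1: Python, for its part, sides with 0^0 = 1
```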


I think this issue can be (and, in practice, indeed is) a matter of definition, guided by some convention that is convenient in a given framework, like the equality $0! = 1$.

Looking for the required definition, I decided to base my answer on Wikipedia. Maybe it is not as canonical as, for instance, the books of Nicolas Bourbaki or Serge Lang's "Algebra", but I like it more (its exposition is more natural and convenient for me), it is easy to access, and it is often referred to on Mathematics Stack Exchange (so, I guess, it is a rather reputable source here). But, most importantly, it is the only source I am aware of that provides a definition of the determinant of an endomorphism, whereas a determinant is usually defined for a matrix, not for a map.

According to Wikipedia, the determinant of a linear transformation $T \colon V\rightarrow V$, for some finite-dimensional vector space $V$, is defined to be the determinant of the matrix describing it with respect to an arbitrary choice of basis of $V$. But if the only element of $V$ is $0$, then $V$ has no basis. Indeed, a basis should be a non-empty subset of $V$ because of the spanning property; on the other hand, the only element of $V$ is $0$, which cannot be included in a basis, because it violates the linear independence property. Since $V$ has no basis, there is no matrix describing the unique operator $z \colon V \to V$ with respect to a basis of $V$, so the determinant of $z$ is undefined.

> I can't help but feel like this is all very silly, but clearly the answer can't be anything other than $1$. Is there anything wrong with giving this answer? Does it cause any problems with any other typical properties of the determinant? Does it simplify any definitions or theorems?

We can abstractly extend the definition of a notion to a wider domain. In the question, you provided arguments for the naturalness of extending $\det z$ both to $1$ and to $0$. We have a somewhat similar ambiguity when defining $0^0$ as a continuous extension of the function $0^x$ or of the function $x^0$ as $x$ tends to zero. At the beginning of this answer I already stated my opinion on how such issues are usually resolved.