The determinant function is the only one satisfying the conditions

I have thought about your problem for a while now and I think there is a nice, slick way to do this. Consider the space $W$ of all multilinear alternating forms $f$ in $k$ variables

$$f : V \times \ldots \times V \to \Bbb{C}.$$

We claim that there is a canonical isomorphism between $W$ and $(\bigwedge^k V)^\ast$. Indeed, this should be clear because given any $f \in W$, the universal property of the $k$-th exterior power tells us that there is a unique linear map $g \in (\bigwedge^k V)^\ast$ such that $f = g \circ \iota$, where $\iota : V \times \ldots \times V \longrightarrow \bigwedge^k V$ is the canonical mapping that sends the tuple $(v_1,\ldots,v_k)$ to $v_1 \wedge \ldots \wedge v_k$. Conversely, given any $h \in (\bigwedge^k V)^\ast$, we can precompose it with $\iota$ to get a multilinear alternating mapping $V \times \ldots \times V \to \Bbb{C}$.

In summary, these facts give us a canonical isomorphism between $W$ and $(\bigwedge^k V)^\ast$. If we put $k = n$, where $n = \dim V$, then, since $\dim_{\Bbb{C}} \bigwedge^k V = \binom{n}{k}$ and $\binom{n}{n} = 1$,

$$ 1= \dim_{\Bbb{C}} \bigwedge\nolimits^{\!k}V = \dim_{\Bbb{C}} \left(\bigwedge\nolimits^{\!k} V\right)^\ast $$

from which it follows that $W$ is one-dimensional. In other words, any $f \in W$ is a scalar multiple of $\det$, where

$$\det : V \times V\times \ldots \times V \longrightarrow \Bbb{C}$$

is the mapping that sends the tuple $(v_1,\ldots, v_n)$ to the determinant of the matrix whose columns are the vectors $v_1, v_2, \ldots, v_n$. Now here comes the killer blow: suppose we demand that an alternating multilinear form $f$ be such that $f(e_1,\ldots,e_n) = 1$, where the $e_i$ are the standard basis vectors of $\Bbb{C}^n$. Then because

$$f(v_1,\ldots,v_n) = c\cdot \det(v_1,\ldots,v_n)$$

for some constant $c$, shoving in $(v_1,\ldots,v_n) = (e_1,\ldots,e_n)$, we must have

$$\begin{eqnarray*} 1 &=& f(e_1,\ldots, e_n) \\ &=& c\cdot \det(e_1,\ldots,e_n) \\ &=& c \end{eqnarray*}$$

because the determinant of the identity matrix is $1$. Consequently we have shown:

Any alternating multilinear form in $n = \dim V$ variables whose value on the tuple $(e_1,\ldots,e_n)$ is $1$ must be equal to the determinant.

$$\hspace{6in} \square$$
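If you want to see the three defining properties in action, here is a small numerical sanity check (purely illustrative, not part of the proof; it assumes Python with numpy and uses real matrices to stand in for the complex case):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
u, v = rng.standard_normal(n), rng.standard_normal(n)
lam = 2.5

def det_cols(*cols):
    """Determinant of the matrix whose columns are the given vectors."""
    return np.linalg.det(np.column_stack(cols))

cols = list(A.T)

# Multilinearity in the first argument:
# det(u + lam*v, ...) = det(u, ...) + lam * det(v, ...)
lhs = det_cols(u + lam * v, *cols[1:])
rhs = det_cols(u, *cols[1:]) + lam * det_cols(v, *cols[1:])
assert np.isclose(lhs, rhs)

# Alternating: two equal arguments give 0
assert np.isclose(det_cols(cols[0], cols[0], *cols[2:]), 0.0)

# Normalisation: det(e_1, ..., e_n) = 1
assert np.isclose(det_cols(*np.eye(n).T), 1.0)
```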


An alternative way is Gaussian elimination: for a given $n\times n$ matrix $A$ with rows $r_1,\ldots,r_n$, the following row operations may be used in order to arrive at the identity matrix or at a matrix with a zero row (by linearity, if $A$ has a zero row, the 'Artinian determinant' has to be zero).

  1. Add a scalar multiple of a row $r_j$ to another row $r_i$, i.e.: $i\ne j$ and $$r_i':= r_i+\lambda r_j$$
  2. Multiply a row by a nonzero scalar, i.e.: $\lambda\ne 0$ and $$r_i':=\lambda\cdot r_i$$
  3. Exchange two rows (this can also be obtained from 1. and 2.)

Suppose we have two 'Artinian determinants' $D$ and $D'$. Using the above-mentioned fact that every matrix can be transformed into the identity or into a matrix with a zero row, we get $D=D'$, because operation 1. leaves both $D$ and $D'$ unchanged (why?), operation 2. multiplies both by $\lambda$, and operation 3. multiplies both by $-1$.
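For concreteness, here is a short Python sketch of this procedure (my own illustration, not part of the argument): reduce $A$ by operations 1.–3. to the identity or to a matrix with a zero row, and track how each operation rescales the determinant.

```python
import numpy as np

def det_by_elimination(A):
    """Compute the determinant using only the three row operations,
    tracking how each one rescales the answer: operation 1 changes
    nothing, operation 2 scales by lambda, operation 3 flips the sign."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    scale = 1.0
    for i in range(n):
        # Find a pivot in column i; if there is none, elimination
        # produces a zero row, so the determinant is 0.
        p = next((r for r in range(i, n) if abs(A[r, i]) > 1e-12), None)
        if p is None:
            return 0.0
        if p != i:                 # operation 3: swap two rows
            A[[i, p]] = A[[p, i]]
            scale *= -1.0
        pivot = A[i, i]
        A[i] /= pivot              # operation 2: scale a row by 1/pivot,
        scale *= pivot             # so compensate by the factor pivot
        for r in range(n):         # operation 1: add multiples of row i
            if r != i:
                A[r] -= A[r, i] * A[i]
    return scale                   # A is now the identity, whose determinant is 1

print(det_by_elimination([[1, 2], [3, 4]]))  # -2.0
```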


A basic fact about linear functions is that they are completely determined by their values on a basis of the vector space. For a multi-linear function this means (repeating this statement for each argument) that it is determined by its values where each argument independently runs through a basis of the vector space. For a function of a matrix that is linear in the rows, it means that the function is determined by the values it takes on matrices in which each row has a single entry $1$ and all other entries $0$. Concretely, if such a function is written $f(v_1,\ldots,v_n)$, the arguments being the rows of a matrix $A$, then by multi-linearity $$ f(A)=\sum_{j_1,j_2,\ldots,j_n=1}^n a_{1,j_1}a_{2,j_2}\cdots a_{n,j_n} \, f(e_{j_1},e_{j_2},\ldots,e_{j_n}), $$ where $e_k$ is the $k$-th standard basis vector viewed as a row.
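To see this expansion at work, here is a brute-force check (a sketch assuming Python with numpy; I take $f = \det$ just to verify the identity numerically, the point being that the right-hand side only ever consults $f$ on tuples of standard basis rows):

```python
import itertools
import numpy as np

n = 3
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
e = np.eye(n)  # e[k] is the k-th standard basis row (0-based)

# f evaluated on a tuple of rows; here f = det for the check
f = lambda rows: np.linalg.det(np.vstack(rows))

# Sum over all n**n index tuples (j_1, ..., j_n)
expansion = sum(
    np.prod([A[i, j] for i, j in enumerate(js)]) * f([e[j] for j in js])
    for js in itertools.product(range(n), repeat=n)
)
assert np.isclose(expansion, np.linalg.det(A))
```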

Now we must take into account that $f$ vanishes whenever two adjacent rows are equal. This implies directly that in the above summation one can drop any terms for which $j_i=j_{i+1}$ for some $i$. But also, by a standard "polarisation" argument (namely that $g(x+y,x+y)=g(x,x)+g(x,y)+g(y,x)+g(y,y)$ for bilinear $g$, so $g(x,y)=-g(y,x)$ if in addition $g$ vanishes on equal arguments), $f$ changes sign whenever we interchange two adjacent rows. So if $j_i>j_{i+1}$ for some $i$, then we have $$ f(e_{j_1},e_{j_2},\ldots,e_{j_i},e_{j_{i+1}},\ldots,e_{j_n}) =-f(e_{j_1},e_{j_2},\ldots,e_{j_{i+1}},e_{j_i},\ldots,e_{j_n}), $$ and the sequence of indices $j_1,j_2,\ldots,j_{i-1},j_{i+1},j_i,j_{i+2},\ldots,j_n$ on the right, in which $j_i$ and $j_{i+1}$ have been interchanged, has one fewer inversion than the sequence on the left (an inversion of a sequence being a pair of positions where the term in the left position is strictly larger than the one in the right position). (You may notice I am re-doing a proof that any permutation is a composition of adjacent transpositions; one could also use that fact to show that any permutation of the arguments of $f$ affects the value by the sign of that permutation.)

Now for any sequence $(j_1,j_2,\ldots,j_n)$ other than $(1,2,\ldots,n)$, we either find that $f(e_{j_1},e_{j_2},\ldots,e_{j_n})$ is zero, or that it is determined by a similar value of $f$ at a sequence of indices with strictly fewer inversions. It follows (by induction on the number of inversions) that all such terms are determined by $f(e_1,\ldots,e_n)$ alone. Finally, it was given that $f(e_1,\ldots,e_n)=1$, so $f$ is completely determined.
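This adjacent-swap argument transcribes directly into code. The following Python sketch (my illustration; indices are $1$-based to match the text) computes the value forced on any tuple of basis rows: it bubble-sorts the index sequence by adjacent swaps, flipping the sign each time, and returns $0$ as soon as two equal indices become adjacent.

```python
def basis_value(js):
    """Value of f(e_{j_1}, ..., e_{j_n}) forced by the argument above."""
    js, sign = list(js), 1
    changed = True
    while changed:
        changed = False
        for i in range(len(js) - 1):
            if js[i] == js[i + 1]:
                return 0                      # f vanishes on equal adjacent rows
            if js[i] > js[i + 1]:
                js[i], js[i + 1] = js[i + 1], js[i]
                sign = -sign                  # each adjacent swap flips the sign
                changed = True
    # Sorted with no repeats: js is (1, 2, ..., n), where f = 1
    return sign

print(basis_value((2, 1, 3)))  # -1
print(basis_value((2, 1, 2)))  #  0
```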

As a bonus, this argument gives the explicit Leibniz formula for the determinant, once you check that $f(e_{\pi_1},e_{\pi_2},\ldots,e_{\pi_n})=\operatorname{sgn}(\pi)$ for any permutation $\pi$ and that $f(e_{j_1},e_{j_2},\ldots,e_{j_n})=0$ for any non-permutation $(j_1,j_2,\ldots,j_n)$.
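Written out as code, the Leibniz formula reads as follows (a sketch; numpy is used only to compare against a reference determinant):

```python
import itertools
import numpy as np

def leibniz_det(A):
    """det(A) = sum over permutations pi of sgn(pi) * a_{1,pi(1)} ... a_{n,pi(n)}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for pi in itertools.permutations(range(n)):
        # sgn(pi) = (-1)^(number of inversions of pi)
        inversions = sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inversions * np.prod([A[i, pi[i]] for i in range(n)])
    return total

A = np.random.default_rng(2).standard_normal((4, 4))
assert np.isclose(leibniz_det(A), np.linalg.det(A))
```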