Why is anti-symmetry a desirable quality in determinants?
I hear the determinant of a matrix can be defined using three facts:
1. It is multilinear.
2. It is anti-symmetric.
3. It is scaled so that the determinant of the identity is 1.
But I don't understand why anti-symmetry is on that list. Why do people want determinants to be anti-symmetric?
Solution 1:
My answer is pretty much taken from Winitzki's Linear Algebra via Exterior Products, a very good book available legitimately for free online.
Here's the idea. Let's say we have two vectors, $\mathbf{a}, \mathbf{b} \in \mathbb{R}^2$. It's not hard to show that the area of the parallelogram with vertices $\mathbf{0}, \mathbf{a}, \mathbf{b}$ and $\mathbf{a} + \mathbf{b}$ is $|\mathbf{a}| \cdot |\mathbf{b}| \sin\theta$, where $\theta$ is the angle between the two vectors.
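As a quick sanity check (a minimal numpy sketch of my own, not from Winitzki), the $2 \times 2$ determinant with $\mathbf{a}$ and $\mathbf{b}$ as columns reproduces exactly this signed area:

```python
import numpy as np

# Arbitrary example vectors in R^2
a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])

# Signed area via the determinant, with a and b as columns
det_area = np.linalg.det(np.column_stack([a, b]))

# |a| |b| sin(theta), with theta the (signed) angle from a to b
theta = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])
geom_area = np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta)

print(det_area, geom_area)  # both are 5.0 (up to rounding)
```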
Let's give this function a name: $Ar(\mathbf{a}, \mathbf{b})$. Since we demand linearity in each argument, it must be the case that
\begin{align*}Ar(\mathbf{a + b},\mathbf{a + b}) &= Ar(\mathbf{a} , \mathbf{a}) +Ar(\mathbf{a} , \mathbf{b}) +Ar(\mathbf{b} , \mathbf{a}) + Ar(\mathbf{b} , \mathbf{b})\\ &= 0 + Ar(\mathbf{a} , \mathbf{b}) + Ar(\mathbf{b} , \mathbf{a}) + 0\\ &= 0,\end{align*}
where all the zeros come from the fact that the area $Ar(\mathbf{x} , \mathbf{x})$ can only sensibly be $0$ for all vectors $\mathbf{x}$.
Thus, we're forced to set $Ar(\mathbf{b} , \mathbf{a}) = -Ar(\mathbf{a} , \mathbf{b})$, in order to get the sane result that $Ar(\mathbf{a+b} , \mathbf{a+b}) = 0$.
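One can watch this forced sign flip numerically; here is a small illustrative check (my own sketch, using numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(2), rng.standard_normal(2)

M         = np.column_stack([a, b])
M_swapped = np.column_stack([b, a])  # arguments exchanged
M_degen   = np.column_stack([a, a])  # repeated vector: degenerate parallelogram

print(np.linalg.det(M), np.linalg.det(M_swapped))  # same magnitude, opposite signs
print(np.linalg.det(M_degen))                      # 0 (up to rounding)
```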
This isn't the whole story, of course, but that's the idea: vanishing on a linearly dependent input list, together with (multi)linearity, forces antisymmetry on us. In order to get sensible results for the volumes of parallelepipeds, we need oriented volume.
That, in my opinion, is the best motivator for the determinant: thinking in terms of signed volumes. It has geometric intuition and even 'detects' linear dependence, as we saw above. It provides a great way to see why antisymmetry is essentially required.
Solution 2:
Determinants are useful beasts, for many reasons: they characterize invertibility of matrices, they let us solve linear systems of equations, they show up in the change-of-variables formula for multiple integrals, and what not. And, mind you, it is not that we put them there: they show up of their own volition; that is how (mathematical) nature is.
Once we have observed that determinants are useful, we face the problem of describing what the determinant is. Indeed, once we notice that the same thing shows up all over the place, we might just as well be precise about what it is we are finding in all these different places! There are many ways to do this. We can give the huge ugly formula with the sum over permutations, for example, among others.
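For concreteness, that permutation-sum (Leibniz) formula can be written out in a few lines; the following Python sketch (the function names are mine) implements it directly:

```python
from itertools import permutations
from math import prod

def sign(perm):
    # Parity of the number of inversions in the permutation
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    # The "huge ugly formula": a sum over all n! permutations
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_leibniz([[1, 2], [3, 4]]))  # -2
```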
How do we choose which description of determinants is best? Well, we want it to be as concise as possible, simple, conceptual, flexible. We want to be able to prove things about the determinant, and the description we pick should make this easy, not hard. And so on.
One possible description of determinants is the one you mention. It is very successful in all those respects.
In other words, it is not that we decide that determinants should be antisymmetric: they are antisymmetric independently of our wishes. And it turns out that we can take advantage of that to describe what a determinant is.
There are two approaches to defining something.
First, you can define an object by construction, just as one does when one says «the determinant of a matrix is the scalar one gets when one does this ugly computation». This is a good approach at times, but then one is left with proving that the object we have explicitly constructed has all the properties we want it to have, and this can be more or less difficult, depending on the circumstances.
Alternatively, if we study the object in depth, it might be the case that we come up with the observation that it has properties X, Y and Z, and that in fact it is the only object which has those three properties. This allows us to define the object as «the only object which has properties X, Y, and Z». Now, this definition has a problem: we have to check that an object having those three properties indeed exists and is unique.
Historically, most objects get defined initially in the first way, and then, as our knowledge of their properties increases, we redefine them in the second style.
Solution 3:
Other answers discuss the background better, so I'll just go for the question of why anti-symmetry is required in that description of determinants. The answer is simply that without it, uniqueness of the form described would fail, and it would fail miserably. (By the way, this nice description of determinants has one aspect that you, like many others, overlooked: mentioning multi-linearity and anti-symmetry (the alternating property) only makes sense if you mention that the determinant is being regarded as a function of the columns of the matrix; an alternating multilinear form of all $n^2$ entries would be something radically different, and in fact impossible.)
A bilinear form $B$ on an $n$-dimensional space is determined by giving the $n^2$ values $B(e_i,e_j)$ where $e_i,e_j$ independently run through a chosen basis $e_1,\ldots,e_n$, and (without symmetry condition) these values can be chosen arbitrarily. Similarly a multilinear form$~M$ of $k$ arguments that are vectors in dimension$~n$ is determined by giving the $n^k$ values $M(e_{i_1},e_{i_2},\ldots,e_{i_k})$ where each argument independently runs through the chosen basis, and these values can again be chosen arbitrarily. Thus the space of such $k$-linear forms has dimension $n^k$; in particular the dimension of the space of all $n$-linear forms is a whopping $n^n$. It is clear that one cannot single out the determinant as element of this space by merely imposing its value at the identity matrix (condition 3).
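For example, with $n=k=2$, bilinearity expands everything in terms of the $2^2=4$ basis values: writing $x = x_1 e_1 + x_2 e_2$ and $y = y_1 e_1 + y_2 e_2$,
$$B(x,y) = x_1 y_1\, B(e_1,e_1) + x_1 y_2\, B(e_1,e_2) + x_2 y_1\, B(e_2,e_1) + x_2 y_2\, B(e_2,e_2),$$
and the four values $B(e_i,e_j)$ can be chosen freely, so the space indeed has dimension $4 = 2^2$.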
By additionally imposing symmetry conditions, subspaces of smaller dimension can be defined. For bilinear forms, imposing symmetry pairs up most of the values $B(e_i,e_j)$, and leaves $\frac{n^2+n}2=\binom{n+1}2$ of them to be chosen freely. Imposing anti-symmetry instead relates the same pairs (but differently), and in addition forces the "diagonal" values $B(e_i,e_i)$ to be zero, leaving a dimension of $\frac{n^2-n}2=\binom n2$. Similarly, for multilinear forms of $k$ arguments, imposing full symmetry defines a subspace of dimension $\binom{n+k-1}k$, while imposing full anti-symmetry instead defines a subspace of dimension$~\binom nk$. And for $k=n$ the latter means something miraculous: the subspace has dimension $\binom nn=1$ (remember that this is out of an original dimension$~n^n$). (By contrast, the space of fully symmetric $n$-linear forms still has dimension $\binom{2n-1}n$; one would still need some very strong additional restriction to single out one special such form.) So the alternating condition has succeeded in eliminating almost all the freedom in choosing a form, leaving just enough freedom to avoid being left with only the zero form. Condition 3 is now precisely what is needed to single out our single very special form: the determinant.
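To see just how dramatic this collapse is, one can tabulate these dimensions for small $n$ (a throwaway Python sketch using the standard library's `math.comb`):

```python
from math import comb

# Dimensions of spaces of n-linear forms on an n-dimensional space
print("n   all      symmetric  alternating")
for n in range(1, 7):
    all_forms   = n ** n              # no symmetry imposed
    symmetric   = comb(2 * n - 1, n)  # fully symmetric
    alternating = comb(n, n)          # fully anti-symmetric: always 1
    print(f"{n}   {all_forms:<8} {symmetric:<10} {alternating}")
```

Already at $n=6$ the unrestricted space has dimension $46656$ and the symmetric one dimension $462$, while the alternating one is stuck at $1$.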
Solution 4:
I don't think this property was ever "desired". Determinants come naturally out of linear algebra theory when you want to solve linear systems of equations and test linear dependence, so they are, in a suitable sense, invariant under (rank-preserving) linear combinations of rows or columns.
Indeed, a determinant is zero when its rows (columns) are linearly dependent, and in particular when two of them are equal. Together with multilinearity, this implies antisymmetry: if $f(x,x)=0$ for all $x$, then $0=f(x+y,\,x+y)=f(x,y)+f(y,x)$, so $f(x,y)=-f(y,x)$.
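A quick numerical illustration of both facts (a sketch of mine, using numpy), this time with rows:

```python
import numpy as np

rng = np.random.default_rng(1)
r1, r2, r3 = rng.standard_normal((3, 3))

A = np.vstack([r1, r2, 2 * r1 - 3 * r2])  # third row depends on the first two
print(np.linalg.det(A))                   # ~0: dependent rows kill the determinant

B         = np.vstack([r1, r2, r3])
B_swapped = np.vstack([r2, r1, r3])       # first two rows exchanged
print(np.linalg.det(B), np.linalg.det(B_swapped))  # same magnitude, opposite signs
```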