Proof and Intuition of the Determinant Formula?
Solution 1:
You seem to have rediscovered the adjugate matrix. I suppose you could try to use it to define the determinant, but I'd worry about whether it's well-defined, i.e. independent of whatever choices you've made while row-reducing. It is basically another way to think of Cramer's rule.
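(For context, since it isn't stated in the question: the adjugate has entries $\operatorname{adj}(A)_{ij} = (-1)^{i+j} M_{ji}$, where $M_{ji}$ is the $(j,i)$ minor, and it satisfies $A \operatorname{adj}(A) = \operatorname{adj}(A) A = \det(A) I$. When $\det(A) \neq 0$ this gives $A^{-1} = \operatorname{adj}(A)/\det(A)$, which is Cramer's rule in matrix form.)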
The indisputably* conceptually correct way to introduce determinants is through exterior algebra using the induced map on the highest exterior power. This is much too technical for virtually anyone who's just learning it, unfortunately. So different authors will pick random bits and pieces of the true picture that they think are sufficiently palatable to their audience.
But I can easily give you a flavor for what's going on and why inversions show up naturally, if you're willing to take a bit on faith.
Suppose $\vec{u}, \vec{v}$ are 2D vectors. Let $f(\vec{u}, \vec{v})$ be the area of the parallelogram they determine. Imagine replacing $\vec{u}$ with $t\vec{u}$ for a scalar $t$ which varies from $1$ to $-1$. We have $f(t\vec{u}, \vec{v}) = |t|f(\vec{u}, \vec{v})$. That absolute value sign is a bit strange, though--it prevents the function from being smooth! It feels like maybe when $t$ passes through zero, we should just use a "negative area". This ends up being the correct choice. In 3D, the notion of "orientation" ends up being extremely natural if you do anything with, say, computer graphics. So we effectively just get rid of the absolute value sign and introduce a signed area. More generally, we'd be interested in the signed hypervolume of the $n$-dimensional parallelogram determined by $n$ vectors in $n$-dimensional space.
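If it helps to see the smoothness claim concretely, here is a small Python sketch (the helper name `signed_area` and the sample vectors are my own illustration, using the coordinate formula derived below): the signed area varies linearly in $t$, while its absolute value has a corner at $t = 0$.

```python
# Signed vs. unsigned area as t sweeps from -1 to 1.
# Uses the 2D coordinate formula derived below: A(u, v) = u_x v_y - u_y v_x.

def signed_area(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v."""
    return u[0] * v[1] - u[1] * v[0]

u, v = (1.0, 0.0), (0.0, 1.0)
for t in (-1.0, -0.5, 0.0, 0.5, 1.0):
    tu = (t * u[0], t * u[1])
    s = signed_area(tu, v)  # varies linearly (smoothly) in t
    print(f"t={t:+.1f}  signed={s:+.1f}  unsigned={abs(s):.1f}")
```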
In 2D, if you play around with it, you'll find any reasonable signed area function $A(\vec{u}, \vec{v})$ must satisfy at least three properties:
1. Scaling: $A(c\vec{u}, \vec{v}) = cA(\vec{u}, \vec{v})$
2. Additivity: $A(\vec{u}_1 + \vec{u}_2, \vec{v}) = A(\vec{u}_1, \vec{v}) + A(\vec{u}_2, \vec{v})$
3. Alternating: $A(\vec{u}, \vec{v}) = -A(\vec{v}, \vec{u})$
Note that setting $\vec{v} = \vec{u}$ in (3) gives $A(\vec{u}, \vec{u}) = -A(\vec{u}, \vec{u})$, hence $A(\vec{u}, \vec{u}) = 0$, which is obvious from the area interpretation (whew!).
Ok, what if we had the coordinates of $\vec{u}$ and $\vec{v}$ in terms of the standard basis vectors--what would $A$ be in those coordinates? That is, suppose $\vec{u} = a_{11} \vec{e}_1 + a_{21} \vec{e}_2$, $\vec{v} = a_{12} \vec{e}_1 + a_{22} \vec{e}_2$. Liberally using properties (1)-(3), together with the normalization $A(\vec{e}_1, \vec{e}_2) = 1$ (the unit square has area $1$), we compute:
\begin{align*} A(\vec{u}, \vec{v}) &= A(a_{11} \vec{e}_1 + a_{21} \vec{e}_2, a_{12} \vec{e}_1 + a_{22} \vec{e}_2) \\ &= a_{11} a_{12} A(\vec{e}_1, \vec{e}_1) + a_{11} a_{22} A(\vec{e}_1, \vec{e}_2) + a_{21} a_{12} A(\vec{e}_2, \vec{e}_1) + a_{21} a_{22} A(\vec{e}_2, \vec{e}_2) \\ &= (a_{11} a_{22} - a_{12} a_{21}) A(\vec{e}_1, \vec{e}_2) \\ &= a_{11} a_{22} - a_{12} a_{21}. \end{align*}
This is exactly the determinant of the $2 \times 2$ matrix listing the coordinates of $\vec{u}$ and $\vec{v}$ in its columns.
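For a concrete instance (the numbers are arbitrary, chosen just to show the sign flip): with $\vec{u} = 3\vec{e}_1 + \vec{e}_2$ and $\vec{v} = \vec{e}_1 + 2\vec{e}_2$,
$$A(\vec{u}, \vec{v}) = 3 \cdot 2 - 1 \cdot 1 = 5, \qquad A(\vec{v}, \vec{u}) = 1 \cdot 1 - 3 \cdot 2 = -5,$$
and the sign flip under swapping the arguments is exactly property (3) in action.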
You can play the same game with $n \times n$ matrices. You'll see quickly that the resulting expression will be a sum over permutations, and the only question will be what sign to use. The inversion number is simply the number of adjacent transpositions needed to straighten out the relevant term back to $A(\vec{e}_1, \ldots, \vec{e}_n)$, and by property (3) each transposition flips the sign--so it's got the right parity!
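If you like to compute, here is a minimal Python sketch of the resulting permutation-sum (Leibniz) formula, with the sign of each term taken from its inversion count (the names `inversions` and `det_leibniz` are my own labels):

```python
from itertools import permutations

def inversions(p):
    """Number of pairs (i, j) with i < j but p[i] > p[j]."""
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

def det_leibniz(a):
    """Determinant as a sum over permutations, signed by inversion count."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = (-1) ** inversions(p)
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total

# Sanity check against the 2x2 formula derived above.
assert det_leibniz([[3, 1], [1, 2]]) == 3 * 2 - 1 * 1  # = 5
```

Of course this is $O(n \cdot n!)$ work, so it's a definition to reason with, not an algorithm to run on large matrices.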
Ok, but existence of a function $A$ satisfying (1)-(3) isn't necessarily clear. To prove it rigorously, you reverse the whole thing: first define inversion numbers and study their basic properties, then use the Laplace expansion formula to define the determinant, then show it actually satisfies properties (1)-(3). Or you could do a higher-tech version of the same thing by introducing the exterior algebra. But at some point you're going to have to show that the $n$th exterior power of an $n$-dimensional vector space is $1$-dimensional (and not $0$-dimensional), which will require some sort of construction like this no matter what.
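For completeness, here is what that bottom-up route looks like as a short Python sketch of the Laplace (cofactor) expansion along the first row (again, the naming is my own; checking that it satisfies (1)-(3) is exactly the program described above):

```python
def det_laplace(a):
    """Determinant via Laplace (cofactor) expansion along the first row --
    the recursive definition; explicit, not efficient."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # Delete row 0 and column j to form the minor.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_laplace(minor)
    return total

# By hand: 2*(3*2 - (-1)*4) - 0 + 1*(1*4 - 3*0) = 20 + 4 = 24.
assert det_laplace([[2, 0, 1], [1, 3, -1], [0, 4, 2]]) == 24
```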
*(Hah!)