Determinant of large matrices: there's GOTTA be a faster way
Solution 1:
No, this is not the way that any (sane) person would compute a determinant. This is not even the way a computer would calculate a determinant! It requires a sum over $n!$ terms, which quickly becomes infeasible even for a computer, around $n = 15$ or so. An elementary way to compute a determinant quickly is by using Gaussian elimination.
We know a few facts about the determinant:
1. Adding a scalar multiple of one row to another does not change the determinant.
2. Interchanging two rows negates the determinant.
3. Scaling a row by a constant multiplies the determinant by that constant.
So, now take the matrix
$$ A = \begin{bmatrix}-4 & 3 &3 \\ 8 & 7 & 3 \\ 4 & 3 & 3\end{bmatrix} $$
By fact (1) above, I can add twice the top row to the middle row, and also the top row to the bottom row, without affecting the determinant. So:
$$ \det A = \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 13 & 9 \\ 0 & 6 & 6\end{bmatrix}$$
Now, I can interchange the bottom two rows, and then factor $6$ out of the row of $6$'s, at a combined cost of $-6$:
$$ \det A = - \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 6 & 6 \\ 0 & 13 & 9 \end{bmatrix} = - 6 \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 1 & 1 \\ 0 & 13 & 9 \end{bmatrix}$$
Now I can subtract $13$ times the middle row from the bottom row:
$$ \det A = - 6 \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 1 & 1 \\ 0 & 13 & 9 \end{bmatrix} = - 6 \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 1 & 1 \\ 0 & 0 & -4 \end{bmatrix}$$
Now the matrix is upper-triangular, and so the determinant is just the product of the diagonal entries. So we have
$$ \det A = -6 (-4 \times 1 \times -4) = -96 $$
So there you have it: computing a determinant is as easy as finding row-echelon form.
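In code, the whole procedure is only a few lines. Here is a minimal Python/NumPy sketch of determinant-by-elimination (the partial pivoting and the function name `det_by_elimination` are my additions, not part of the hand computation above):

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination, tracking swaps and pivots."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    det = 1.0
    for k in range(n):
        # Pivot on the largest entry in column k (numerical stability).
        p = k + np.argmax(np.abs(U[k:, k]))
        if U[p, k] == 0.0:
            return 0.0                    # a zero column: singular matrix
        if p != k:
            U[[k, p]] = U[[p, k]]         # row swap negates the determinant
            det = -det
        det *= U[k, k]
        # Adding multiples of the pivot row below leaves det unchanged.
        U[k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k])
    return det

A = np.array([[-4, 3, 3], [8, 7, 3], [4, 3, 3]])
print(det_by_elimination(A))  # -96.0
```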
Solution 2:
In general, determinants of large matrices are not computed by cofactor expansion but rather by factoring the matrix into factors whose determinants are easy to compute.
For example, you can factor an $n$ by $n$ matrix $A$ as
$A=P^{T}LU$
where $P$ is a permutation matrix, $L$ is lower triangular, and $U$ is upper triangular. This computation takes $O(n^{3})$ time for an $n$ by $n$ matrix $A$.
Since
$\det(A)=\det(P^{T})\det(L)\det(U)$
and the determinants of $P^{T}$, $L$, and $U$ are easy to compute (the determinant of a lower or upper triangular matrix is the product of its diagonal entries, and the determinant of a permutation matrix is $\pm 1$, the sign of the permutation, which you can read off with a quick cofactor expansion), you can quickly find the determinant of $A$.
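As a sketch of this recipe in Python (SciPy's `lu_factor` returns the combined $L$ and $U$ factors plus a pivot array encoding $P$; the helper name `det_via_plu` is my own):

```python
import numpy as np
from scipy.linalg import lu_factor

def det_via_plu(A):
    """det(A) from A = P^T L U: sign of P times the product of diag(U)."""
    lu, piv = lu_factor(A)                # L has unit diagonal, so det(L) = 1
    # Each entry with piv[i] != i records one row interchange, and each
    # interchange flips the sign of the determinant.
    swaps = np.count_nonzero(piv != np.arange(len(piv)))
    return (-1.0) ** swaps * np.prod(np.diag(lu))
```

For large matrices the product of diagonal entries easily over- or underflows in floating point, which is why library routines such as `numpy.linalg.slogdet` accumulate the logarithms of the diagonal entries instead.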
If you want to try a computational experiment, test MATLAB's det() function on randomly generated matrices of size $n$ by $n$ for $n=1000, 2000,\ldots, 10000$ and use tic/toc to see how long the computation takes.
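A Python/NumPy analogue of that experiment (NumPy's `det`, like MATLAB's, is LU-based, so the time should grow roughly like $n^{3}$; expect the largest sizes to take a while):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
for n in range(1000, 10001, 1000):
    A = rng.standard_normal((n, n))
    t0 = time.perf_counter()
    np.linalg.det(A)
    print(f"n = {n:5d}: {time.perf_counter() - t0:6.2f} s")
```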
Solution 3:
If the entries of your matrix belong to a field, then you can compute the determinant easily using either LPU decomposition or PLU decomposition. These algorithms take $O \left(n^3\right)$ time, where $n$ is the size of the matrix.
If the entries of your matrix belong to an arbitrary commutative ring, then there are still $O \left(n^4\right)$-time algorithms to compute the determinant. See Günter Rote, Division-free algorithms for the determinant and the Pfaffian: Algebraic and Combinatorial Approaches, §2. (If I understand correctly, the rough idea of at least one of the algorithms is to replace the matrix $A \in R^{n\times n}$ by the matrix $1 - At$ over the power series ring $R \left[\left[t\right]\right]$, then compute the determinant of the latter via LU decomposition (which always exists in the power series ring), and then read off $\det A$, up to the sign $\left(-1\right)^n$, as the coefficient of $t^n$ in the resulting polynomial. Of course, power series per se are uncomputable, but here only the first $n+1$ coefficients need to be tracked.)
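Rote's power-series construction is a little long to spell out here, but to illustrate the $O \left(n^4\right)$ division-free claim, here is a sketch of a different division-free method, Bird's algorithm (R. S. Bird, "A simple division-free algorithm for computing determinants", Information Processing Letters, 2011). It uses only $+$, $-$ and $\cdot$ (just $n-1$ matrix products), so it runs verbatim over any commutative ring:

```python
import numpy as np

def bird_det(A):
    """Division-free determinant (Bird's algorithm): n - 1 matrix
    products built from ring operations only, hence O(n^4) overall."""
    n = A.shape[0]
    X = A.copy()
    for _ in range(n - 1):
        M = np.triu(X, k=1)               # strictly upper-triangular part of X
        d = np.diag(X)
        tail = np.cumsum(d[::-1])[::-1]   # tail[i] = d[i] + ... + d[n-1]
        np.fill_diagonal(M, d - tail)     # entry (i, i): -(d[i+1] + ... + d[n-1])
        X = M @ A
    return (-1) ** (n - 1) * X[0, 0]

# Exact integer entries stay exact, since no division ever occurs:
A = np.array([[-4, 3, 3], [8, 7, 3], [4, 3, 3]], dtype=object)
print(bird_det(A))  # -96
```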
Of course, the algorithms cannot do magic. The running-time estimates of $O \left(n^3\right)$ and $O \left(n^4\right)$ assume that the fundamental operations of the base ring ($+$, $\cdot$, $-$ and taking inverses in the case of a field) take constant time and that the overhead of storing and copying matrix entries is negligible. This assumption is justified if the base ring is a finite field or (to some extent) if the base "ring" is the floating-point reals (although these don't really form a ring, so you might end up with completely wrong results due to numerical instability), but not if the base ring is the integers (because integers get harder to work with the larger they become), the rational numbers or a polynomial ring.

When the entries of your matrix are algebraically independent indeterminates, you should not expect anything too fast, at least if you want the result in expanded form; after all, the result will simply be the general formula for the determinant of an $n \times n$ matrix, which "contains" a list of all $n!$ permutations of $\left\{1,2,\ldots,n\right\}$, and merely writing such a list down takes at least $n!$ steps! There might be faster algorithms that return non-expanded forms (similarly to Horner's scheme for polynomial evaluation), but I wouldn't expect anything with polynomial running time unless you allow the algorithm to return a recursion instead of an explicit sum of products of sums of products of etc.
Solution 4:
Despite appearances, this exercise was not a waste of time. Denote the matrix in question by $A_{n \times n}$. Since the determinant is invariant under similarity transformations, for every $n\ge 2$ it can be expressed in closed form as a function of the other similarity invariants, i.e. the traces of powers of the matrix, $\left( Tr[A^p] \right)_{p=1}^n$. The formula in question reads: \begin{equation} \det(A) = \sum\limits_{m=1}^n \frac{(-1)^{m+n}}{m!} \cdot \sum\limits_{\substack{p_1+p_2+\cdots+p_m=n \\ p_1\ge 1,\,\ldots,\,p_m\ge 1}} \prod\limits_{q=1}^m \frac{Tr[A^{p_q}]}{p_q} \end{equation}
Here the second sum on the right-hand side runs over all compositions of $n$ into $m$ strictly positive integers, i.e. over ordered tuples $(p_1,\ldots,p_m)$.
See the question Calculate a multiple sum of inverse integers for the derivation.
Denote ${\mathcal S}_n := n! \det(A)$ and $a(p) := Tr[A^p]$ for $p\ge 1$ then we have: \begin{eqnarray} {\mathcal S}_2 &=& a(1)^2-a(2)\\ {\mathcal S}_3 &=& a(1)^3-3 a(2) a(1)+2 a(3)\\ {\mathcal S}_4 &=& a(1)^4-6 a(2) a(1)^2+8 a(3) a(1)+3 a(2)^2-6 a(4)\\ {\mathcal S}_5 &=& a(1)^5-10 a(2) a(1)^3+20 a(3) a(1)^2+15 a(2)^2 a(1)-30 a(4) a(1)-20 a(2) a(3)+24 a(5)\\ {\mathcal S}_6 &=& a(1)^6-15 a(2) a(1)^4+40 a(3) a(1)^3+45 a(2)^2 a(1)^2-90 a(4) a(1)^2-120 a(2) a(3) a(1)+144 a(5) a(1)-15 a(2)^3+40 a(3)^2+90 a(2) a(4)-120 a(6)\\ {\mathcal S}_7 &=& a(1)^7-21 a(2) a(1)^5+70 a(3) a(1)^4+105 a(2)^2 a(1)^3-210 a(4) a(1)^3-420 a(2) a(3) a(1)^2+504 a(5) a(1)^2-105 a(2)^3 a(1)+280 a(3)^2 a(1)+630 a(2) a(4) a(1)-840 a(6) a(1)+210 a(2)^2 a(3)-420 a(3) a(4)-504 a(2) a(5)+720 a(7) \end{eqnarray}
It is not hard to see that in each of these formulas the coefficients in front of the products of powers of traces sum to zero, while their absolute values sum to $n!$. The table below shows the number $\nu(n)$ of distinct terms on the right-hand side as a function of the dimension $n$; these are precisely the partition numbers. We have: \begin{array}{r|ccccccccccccccc} n & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16\\ \nu(n) & 2 & 3 & 5 & 7 & 11 & 15 & 22 & 30 & 42 & 56 & 77 & 101 & 135 & 176 & 231 \end{array}
Therefore it is simply not true that this method requires $n!$ calculations. In fact, for $n=49$ there are only $173525$ terms to sum up (see https://oeis.org/A000041/list).
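For what it's worth, here is a Python sketch of this method in its collected form: grouping the compositions in the formula above that are reorderings of one another turns the double sum into a sum over partitions $\lambda = (1^{m_1} 2^{m_2} \cdots)$ of $n$ with weight $z_\lambda = \prod_k k^{m_k}\, m_k!$, so the number of summands is exactly $\nu(n)$. The helper names below are mine:

```python
import numpy as np
from math import factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as {part: multiplicity} dicts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield {}
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            out = dict(rest)
            out[k] = out.get(k, 0) + 1
            yield out

def det_via_traces(A):
    n = A.shape[0]
    tr, P = {}, np.eye(n)
    for k in range(1, n + 1):              # tr[k] = Tr[A^k]
        P = P @ A
        tr[k] = np.trace(P)
    det = 0.0
    for lam in partitions(n):
        length = sum(lam.values())         # number of parts, m
        z, term = 1, 1.0
        for k, m in lam.items():
            z *= k ** m * factorial(m)     # the weight z_lambda
            term *= tr[k] ** m
        det += (-1) ** (n - length) * term / z
    return det

A = np.array([[-4, 3, 3], [8, 7, 3], [4, 3, 3]], dtype=float)
print(det_via_traces(A))  # -96.0, matching the example from Solution 1
```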
Solution 5:
A famous formula says that the determinant is the product of the eigenvalues. If you can find the eigenvalues faster than by doing such an expansion, this can be useful:
$$\det({\bf A}) = \prod_{i=1}^N \lambda_i({\bf A})$$
One way to find the eigenvalues numerically is the power method (with deflation to peel off one eigenvalue after another). You get a good estimate quickly. In terms of complexity: each matrix-vector multiplication costs $O(n^2)$, and there are $n$ eigenvalues to find, which gives roughly $O(n^3)$.
All in all, you get away with a small polynomial cost, compared to the horrendous combinatorial $O(n!)$ cost of what you are doing right now.
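A quick sanity check of the product-of-eigenvalues formula in Python (using NumPy's `eigvals`, which is QR-iteration based, rather than a hand-rolled power method with deflation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

# Eigenvalues of a real matrix may be complex, but they come in
# conjugate pairs, so the imaginary part of the product cancels
# (up to rounding).
prod_of_eigs = np.prod(np.linalg.eigvals(A))
print(prod_of_eigs.real, np.linalg.det(A))  # these should agree closely
```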