In terms of complexity, is there a quicker way of checking if a matrix is nonsingular than computing the determinant?
Solution 1:
There are two quite good methods to check for singularity:
As pointed out in the comments, Gaussian elimination is a very efficient way to check it: the matrix is singular exactly when elimination with pivoting runs into a (numerically) zero pivot.
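A minimal sketch of this check in NumPy (the function name and the tolerance are my own illustrative choices, not a canonical recipe):

```python
import numpy as np

def is_singular_ge(A, tol=1e-12):
    """Check singularity via Gaussian elimination with partial pivoting.

    Returns True if some pivot is smaller than `tol` relative to the
    largest entry of A; `tol` is an illustrative threshold.
    """
    U = np.array(A, dtype=float)
    n = U.shape[0]
    scale = np.abs(U).max() or 1.0
    for k in range(n):
        # Partial pivoting: bring the largest entry in column k up.
        p = k + np.argmax(np.abs(U[k:, k]))
        if np.abs(U[p, k]) < tol * scale:
            return True              # pivot effectively zero -> singular
        U[[k, p]] = U[[p, k]]
        # Eliminate the entries below the pivot.
        U[k+1:, k:] -= np.outer(U[k+1:, k] / U[k, k], U[k, k:])
    return False
```

For example, `is_singular_ge([[1, 2], [2, 4]])` reports the rank-deficient matrix as singular, while `is_singular_ge([[1, 2], [3, 4]])` does not.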
Another way is to use the condition number of the matrix. As we know, a matrix is singular exactly when its condition number is $\infty$. So compute (or estimate) the condition number of your matrix and compare it with $\frac{1}{\varepsilon}$, where $\varepsilon$ is the machine epsilon: if it is of that order or larger, the matrix is numerically singular. Now the problem reduces to finding the condition number, and as far as I know, estimating the condition number can be cheaper than computing the determinant.
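In NumPy, for instance, this test could look like the following (the singular example matrix is my own; `numpy.linalg.cond` computes the 2-norm condition number via an SVD):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])      # rank-deficient, hence singular

kappa = np.linalg.cond(A)      # 2-norm condition number (via SVD)
eps = np.finfo(float).eps      # machine epsilon, about 2.2e-16

# If kappa is of the order of 1/eps (or infinite), treat A as
# numerically singular.
numerically_singular = kappa > 1 / eps
```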
EDIT:
All I know about finding the condition number...
If your matrix is triangular, you can bound the condition number from below by $\kappa(A) \geq \frac{\max_i |a_{ii}|}{\min_i |a_{ii}|}$; it is a reasonable estimate but can be far off for some matrices. Note that if we use a QR factorisation $A = QR$, then $\kappa(A) = \kappa(R)$, since $Q$ is orthogonal. So for an upper triangular matrix, we can estimate the condition number this way in $O(n)$.
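The diagonal-ratio bound is a one-liner; a sketch in NumPy (the function name is my own):

```python
import numpy as np

def tri_cond_lower_bound(R):
    """O(n) lower bound on the condition number of a triangular matrix R:
    kappa(R) >= max_i |r_ii| / min_i |r_ii|."""
    d = np.abs(np.diag(R))
    return d.max() / d.min()

R = np.array([[4.0, 1.0],
              [0.0, 2.0]])
bound = tri_cond_lower_bound(R)   # 4/2 = 2.0, a lower bound on kappa(R)
```

Here `np.linalg.cond(R)` is about 2.16, so the bound of 2.0 indeed underestimates it.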
For a tridiagonal matrix it can be found in $O(n)$.
For other matrices, either first perform a QR factorisation and then apply the triangular bound above, or find the singular values of the matrix, since the condition number is the ratio of the largest singular value to the smallest. One way of doing this is an SVD, though the SVD takes $O(n^3)$. I think algorithms that compute only the singular values (without the singular vectors) exist and should be faster; you'll have to search for them.
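As a sketch of the singular-value route (using NumPy; `compute_uv=False` asks for the singular values only, skipping the singular vectors):

```python
import numpy as np

# A random dense matrix as a stand-in example.
A = np.random.default_rng(0).standard_normal((5, 5))

s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
kappa = s[0] / s[-1]                     # 2-norm condition number
```

This agrees with `np.linalg.cond(A)`, which computes the same ratio internally.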
The rest is my personal view and you are free to disagree, hence I've hidden it :-)
Also, I think the practicality of the algorithms should be taken into consideration. Yes, it is true that we can find the determinant in $O(n^3)$, same as Gaussian elimination, but Gaussian elimination is easy to implement, whereas a fast determinant algorithm is not. (For example, I can implement Gaussian elimination, but I don't even know what an $O(n^3)$ algorithm for the determinant looks like!)
Solution 2:
I asked this question recently on Twitter, did some digging, and found what might be a better answer than these two.
Volker Strassen showed in 1969 that matrix inversion, computing the determinant, Gaussian elimination, and matrix multiplication all have the same asymptotic complexity. The fastest currently known algorithm runs in $O(n^{2.3728596})$.
However, figuring out whether a square matrix can be inverted is (perhaps) a simpler problem: a square matrix is invertible if and only if its rank equals its dimension.
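For example, with NumPy (note that `numpy.linalg.matrix_rank` itself uses an SVD under the hood, so this is convenient rather than asymptotically faster; the example matrix is my own):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])     # rank 2, so not invertible

# Invertible iff rank equals the dimension.
invertible = np.linalg.matrix_rank(A) == A.shape[0]
```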
There seem to be faster algorithms to compute the rank if the matrix is sparse:
- https://cstheory.stackexchange.com/questions/5750/what-is-the-fastest-algorithm-to-compute-rank-of-a-rectangular-matrix
- http://www-scf.usc.edu/~hoyeeche/papers/matrix-rank_conf.pdf
This does seem to suggest that for a typical / random dense square matrix, determining invertibility is essentially as hard as inverting it. I'd love to know if this has been proven rigorously.
Note: there might be additional fanciness possible if the matrix has only small integer entries.