Determining whether a symmetric matrix is positive-definite (algorithm)

Solution 1:

Mathcast had it: in practical work, one uses the Cholesky decomposition $\mathbf G\mathbf G^T$ to test efficiently whether a symmetric matrix is positive definite. The only change you need to make to turn your decomposition program into a check for positive definiteness is to verify, before taking each required square root, that the quantity to be rooted is positive. If it is zero, you have a positive semidefinite matrix; if it is negative, your symmetric matrix isn't positive (semi)definite. (Programming-wise, it should be easy to throw an exception within a loop! If your language has no way to break out of a loop, however, you have my pity.)
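A minimal NumPy sketch of this pivot-checked Cholesky (the function name and return strings are mine, not part of the answer):

```python
import numpy as np

def cholesky_definiteness(a):
    """Cholesky factorization A = G G^T, checking each pivot before
    taking its square root, as described above.  Returns one of
    "positive definite", "positive semidefinite", or "indefinite";
    the semidefinite case simply stops at the first zero pivot."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    g = np.zeros((n, n))
    for j in range(n):
        # the quantity whose square root the factorization needs next
        pivot = a[j, j] - g[j, :j] @ g[j, :j]
        if pivot < 0.0:
            return "indefinite"            # negative quantity under the root
        if pivot == 0.0:
            return "positive semidefinite"
        g[j, j] = np.sqrt(pivot)
        for i in range(j + 1, n):          # fill in the rest of column j
            g[i, j] = (a[i, j] - g[i, :j] @ g[j, :j]) / g[j, j]
    return "positive definite"
```

In real code you would use a tolerance rather than comparing the pivot against exactly zero, but the exact comparison keeps the sketch short.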

Alternatively, one can use the $\mathbf L\mathbf D\mathbf L^T$ decomposition here (an equivalent approach, in the sense that $\mathbf G=\mathbf L\sqrt{\mathbf D}$); if any nonpositive entries show up in $\mathbf D$, then your matrix is not positive definite. Note that one can arrange for the loop computing the decomposition to break as soon as a nonpositive element of $\mathbf D$ is encountered, before the decomposition is finished!
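A sketch of the $\mathbf L\mathbf D\mathbf L^T$ variant with the early break described above (again, the function name is my own):

```python
import numpy as np

def ldlt_is_positive_definite(a):
    """L D L^T factorization (no square roots needed); bail out as
    soon as a nonpositive diagonal entry of D shows up, before the
    decomposition is finished."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    l = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # d_j = a_jj - sum_{k<j} L_jk^2 d_k
        d[j] = a[j, j] - (l[j, :j] ** 2) @ d[:j]
        if d[j] <= 0.0:
            return False                   # break mid-decomposition
        for i in range(j + 1, n):
            l[i, j] = (a[i, j] - (l[i, :j] * l[j, :j]) @ d[:j]) / d[j]
    return True
```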

In any event, I don't understand why people shy away from using Cholesky here; the statement is "a symmetric matrix is positive definite if and only if it possesses a Cholesky decomposition". It's a biconditional; exploit it! It's far cheaper than successively checking leading minors or computing an eigendecomposition, FWIW.
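If you just want to exploit the biconditional with a library routine, NumPy's `np.linalg.cholesky` raises `LinAlgError` exactly when the factorization fails, so a try/except wrapper is enough:

```python
import numpy as np

def is_positive_definite(a):
    """A symmetric matrix is positive definite iff it has a Cholesky
    decomposition, so simply attempt one and catch the failure."""
    try:
        np.linalg.cholesky(np.asarray(a, dtype=float))
        return True
    except np.linalg.LinAlgError:
        return False
```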

Solution 2:

I don't think there is a simpler way than computing a decomposition or determinants unless your matrix has a special form. For example, if it is a sample covariance matrix, then it is positive semidefinite by construction.
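For instance, a sample covariance matrix $S = X_c^T X_c/(m-1)$ built from centered data $X_c$ satisfies $v^T S v = \lVert X_c v\rVert^2/(m-1) \ge 0$ for every $v$, so it is positive semidefinite with no further checking. A quick numerical illustration (the data and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 4))     # 50 samples, 4 variables
xc = x - x.mean(axis=0)              # center the data
s = xc.T @ xc / (len(x) - 1)         # sample covariance matrix

# v^T S v = ||X_c v||^2 / (m-1) >= 0 for any v:
v = rng.standard_normal(4)
assert v @ s @ v >= 0.0
# hence all eigenvalues are nonnegative (up to roundoff):
assert np.linalg.eigvalsh(s).min() > -1e-12
```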

Solution 3:

Other possibilities include using the conjugate gradient algorithm to check positive-definiteness. In theory, this method terminates after at most $n$ iterations ($n$ being the dimension of your matrix); in practice, it may have to run a bit longer. It is trivial to implement. You can also use a variant of the Lanczos method to estimate the smallest eigenvalue of your matrix (which is much easier than computing all eigenvalues!). Pick up a book on numerical linear algebra (check the SIAM collection).
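A pure-NumPy sketch of the conjugate-gradient probe: run plain CG on $Ax=b$ for some right-hand side $b$ and declare failure as soon as a search direction with nonpositive curvature $p^TAp \le 0$ turns up. Note that for a single $b$ this is a heuristic check, not a proof of positive-definiteness; the function name and interface are my own:

```python
import numpy as np

def cg_positive_definite(a, b=None, tol=1e-12, rng=None):
    """Run conjugate gradients on A x = b.  A nonpositive curvature
    p^T A p along any search direction certifies that A is not
    positive definite; uneventful convergence is (heuristic) evidence
    that it is."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    if b is None:
        b = np.random.default_rng(rng).standard_normal(n)
    b = np.asarray(b, dtype=float)
    x = np.zeros(n)
    r = b - a @ x                        # initial residual
    p = r.copy()                         # initial search direction
    for _ in range(n):                   # at most n steps in exact arithmetic
        ap = a @ p
        curvature = p @ ap
        if curvature <= 0.0:
            return False                 # nonpositive curvature found
        alpha = (r @ r) / curvature
        x = x + alpha * p
        r_new = r - alpha * ap
        if np.linalg.norm(r_new) < tol:
            break                        # converged; no bad direction seen
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return True
```

For the Lanczos route, a sparse eigensolver that returns only the smallest eigenvalue (e.g. SciPy's `eigsh`) serves the same purpose.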

At any rate, recall that such methods (and the Cholesky decomposition) check numerical positive-definiteness. It is possible for the smallest eigenvalue of your matrix to be, say, $10^{-16}$, and for cancellation errors due to finite-precision arithmetic to cause your Cholesky factorization (or conjugate gradient iteration) to break down.

Solution 4:

Perhaps someone has an iterative or otherwise approximate approach, but it looks like, to be sure, one has to calculate determinants and apply Sylvester's criterion (as you pointed out).

Sylvester's criterion states that a symmetric matrix is positive definite if and only if $\det(A_k) > 0$ for all $k \le n$, where $A_k$ is the $k$th leading principal submatrix. The only deterministic, efficient algorithm to calculate these determinants exactly that I know of is the Bareiss algorithm, for which you can see Bareiss's original paper or Chee Yap's excellent reference here.

The Bareiss algorithm, in its intermediate steps, ends up calculating the determinants of the leading principal submatrices $A_k$ for all $k \le n$, so you get a simple test for positive definiteness for free should you end up implementing the algorithm yourself.
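A sketch of that test in exact rational arithmetic via `fractions.Fraction` (the pivots produced by Bareiss elimination are exactly the leading principal minors $\det(A_1),\dots,\det(A_n)$; the function name is mine):

```python
from fractions import Fraction

def bareiss_is_positive_definite(a):
    """Fraction-free Bareiss elimination.  The successive pivots equal
    the leading principal minors det(A_1), ..., det(A_n), so
    Sylvester's criterion can be checked, and the loop broken, on
    the fly."""
    m = [[Fraction(x) for x in row] for row in a]
    n = len(m)
    if m[0][0] <= 0:                     # first minor: det(A_1) = a_11
        return False
    for k in range(n - 1):
        prev = Fraction(1) if k == 0 else m[k - 1][k - 1]
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # fraction-free update; the division by prev is exact
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) / prev
        if m[k + 1][k + 1] <= 0:         # next leading minor just appeared
            return False                 # Sylvester's criterion fails; stop early
    return True
```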

I'd be interested to know whether there are any iterative methods that would give you some confidence in a matrix's positive-definiteness.