What is the importance of definite and semidefinite matrices?
Solution 1:
There are many uses for definite and semi-definite matrices. I can give just a few examples, although I will undoubtedly be missing many.
Positive-definite matrices are the matrix analogues of positive numbers. It is generally not possible to define a consistent notion of "positive" for matrices other than symmetric matrices. As a consequence, positive definite matrices are a special class of symmetric matrices (which are themselves another very important, special class of matrices). It turns out that many useful matrices fall into this class, such as the covariance matrix, the overlap matrices used in quantum chemistry, and the dynamical matrices used in calculations of molecular vibrations (which are positive semi-definite).
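For instance, here is a small sketch (the random data and sizes are purely illustrative) that builds a sample covariance matrix with NumPy and checks that its eigenvalues are non-negative, i.e. that it is positive semi-definite:

```python
import numpy as np

# Illustrative only: 100 observations of 3 variables
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 3))

cov = np.cov(data, rowvar=False)        # 3x3 sample covariance matrix
eigenvalues = np.linalg.eigvalsh(cov)   # eigvalsh is for symmetric matrices

print(eigenvalues)                      # all entries are >= 0
print(np.all(eigenvalues >= -1e-12))    # True (up to rounding)
```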
Definiteness is a useful measure for optimization. The quadratic form of a positive definite matrix $A$,
$$\mathbf{x}^\mathrm{T}A\mathbf{x},$$ is always positive for non-zero $\mathbf{x}$ and is convex. Analogous results hold for negative-definite matrices. This is a very desirable property for optimization, since a strictly convex quadratic is guaranteed to have a unique global minimum (and the negative-definite case a unique global maximum). It is properties like these, for example, that let you use the Hessian matrix to classify critical points when optimizing multivariate functions, as sketched in the example below.

Perhaps equally (or more) important, especially to a mathematician, is the fact that the theory of (semi)definite matrices is an incredibly rich and beautiful field. There are chains of elegant results concerning these matrices, especially for positive-definite matrices. That is motivation enough.
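Here is a minimal sketch of that idea (the function $f(x,y)=x^2+xy+y^2$ and its Hessian are just an illustrative choice): the sign pattern of the Hessian's eigenvalues classifies the critical point at the origin.

```python
import numpy as np

# Hessian of f(x, y) = x**2 + x*y + y**2 (constant, since f is quadratic)
hessian = np.array([[2.0, 1.0],
                    [1.0, 2.0]])

eigenvalues = np.linalg.eigvalsh(hessian)   # [1.0, 3.0]

if np.all(eigenvalues > 0):
    print("positive definite -> local minimum")    # taken here
elif np.all(eigenvalues < 0):
    print("negative definite -> local maximum")
else:
    print("indefinite/semi-definite -> need more information")
```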
Solution 2:
Positive definite matrices have applications in various domains like physics, chemistry, etc. In CS, optimization problems often reduce to solving linear systems of the form $Ax=b$; for example, minimizing the quadratic $\tfrac{1}{2}x^\mathrm{T}Ax - b^\mathrm{T}x$ leads to exactly this system. One way to solve it is to compute $A^{-1}$ and then form $x=A^{-1}b$.
Computing $A^{-1}$ directly is time-consuming for large matrices. Instead we use the Cholesky decomposition: if $A$ is symmetric positive definite, it can be factored as $A = LL^\mathrm{T}$ with $L$ lower triangular. Solving the two triangular systems $Ly = b$ and $L^\mathrm{T}x = y$ by forward and back substitution is computationally cheap, so we obtain $x = (L^\mathrm{T})^{-1}L^{-1}b$ with a large performance gain.
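A short sketch of the idea (the matrix values are made up; this relies on SciPy's `cho_factor`/`cho_solve`, which perform the factorization and the two triangular solves without ever forming $A^{-1}$):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Illustrative symmetric positive definite system A x = b
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
b = np.array([2.0, 5.0])

c, low = cho_factor(A)          # computes A = L L^T (stored compactly)
x = cho_solve((c, low), b)      # forward/back substitution, no explicit inverse

print(x)                        # [-0.5  2. ]
print(np.allclose(A @ x, b))    # True
```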