We know that a square matrix is a covariance matrix of some random vector if and only if it is symmetric and positive semi-definite (see Covariance matrix). We also know that every symmetric positive definite matrix is invertible (see Positive definite). It seems that the inverse of a covariance matrix sometimes does not exist.

Does the inverse of a covariance matrix exist if and only if the covariance matrix is positive definite? How can I intuitively understand the situation when the inverse of a covariance matrix does not exist (does it mean that some of the random variables of the random vector are equal to a constant almost surely)?

Any help will be much appreciated!


If the covariance matrix is not positive definite, then (being positive semi-definite) it has $0$ as an eigenvalue, so we have some $a \in \mathbf R^n \setminus \{0\}$ with $\def\C{\mathop{\rm Cov}}\C(X)a = 0$. Hence \begin{align*} 0 &= a^t \C(X)a\\ &= \sum_{i,j} a_i \C(X_i, X_j) a_j\\ &= \mathop{\rm Var}\left(\sum_i a_i X_i\right). \end{align*} So there is some linear combination of the $X_i$ which has zero variance and is hence constant, say equal to $\alpha$, almost surely. Letting $H := \{x \in \mathbf{R}^n: \sum_{i} a_i x_i = \alpha\}$, this means, as @drhab wrote, that $\mathbf P(X \in H) = 1$ for the hyperplane $H$.
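
To make this concrete, here is a small numerical sketch (the variable names and the particular linear dependence are just an illustration, not part of the argument above): the third coordinate of $X$ is an exact linear combination of the first two, so the sample covariance matrix is singular, and the null vector $a$ recovered from it gives a linear combination $\sum_i a_i X_i$ with essentially zero variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000

x1 = rng.normal(size=n_samples)
x2 = rng.normal(size=n_samples)
x3 = 2.0 * x1 - x2 + 5.0           # exact linear dependence plus a constant shift
X = np.column_stack([x1, x2, x3])  # shape (n_samples, 3)

C = np.cov(X, rowvar=False)        # sample covariance matrix, shape (3, 3)

# Null vector of C: the eigenvector for the smallest eigenvalue (which is ~0).
eigvals, eigvecs = np.linalg.eigh(C)
a = eigvecs[:, 0]

print("smallest eigenvalue of Cov(X):", eigvals[0])   # ~0
print("Var(sum_i a_i X_i):", np.var(X @ a))           # ~0: the combination is constant
print("its constant value alpha:", np.mean(X @ a))    # X lies in the hyperplane a.x = alpha
```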


As is nicely explained in "What's the best way to think about the covariance matrix?",

if $A$ is the covariance matrix of some random vector $X\in\mathbb{R}^n$, then for every fixed $\beta\in\mathbb{R}^n$ the variance of the inner product $\langle\beta,X\rangle$ is given by $\langle A\beta,\beta\rangle$. Now, if $A$ is not invertible, there exists a vector $\beta\neq 0$ such that $A\beta=0$, and so $\langle A\beta,\beta\rangle = 0$, which means that the variance of $\langle X,\beta\rangle$ is zero.
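
As a quick sanity check of the identity $\operatorname{Var}(\langle\beta,X\rangle)=\langle A\beta,\beta\rangle$, here is a short simulation sketch (the particular matrix and $\beta$ are arbitrary choices for illustration): the sample variance of the projection $\langle\beta,X\rangle$ agrees with the quadratic form $\beta^T A\beta$ up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 200_000

A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 0.7]])            # a symmetric positive definite covariance matrix
L = np.linalg.cholesky(A)
X = rng.normal(size=(n_samples, 3)) @ L.T  # samples with covariance A

beta = np.array([1.0, -2.0, 0.5])

print("sample Var(<beta, X>):", np.var(X @ beta))
print("<A beta, beta>       :", beta @ A @ beta)   # the two agree up to sampling error
```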

Proposition 1. If the covariance matrix of a random vector $X$ is not invertible then there exists a non-trivial linear combination of the components of $X$ whose variance is zero.

This is closely related to what drhab mentioned in a comment above: if the variance of $\langle X,\beta\rangle$ is zero, then $\langle X,\beta\rangle$ equals some constant $c$ almost surely, and so $X-a\beta$ is almost surely orthogonal to $\beta$, where $a = c/\lVert\beta\rVert^2$. In fact, an alternative but equivalent formulation of the proposition above is:

Proposition 2. If the covariance matrix of a random vector $X$ is not invertible then there exists $\beta\neq 0$ such that a translate of $X$ is orthogonal to $\beta$ with probability one.
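
Here is a small simulation sketch of Proposition 2 (the construction of $X$ is an illustrative assumption): $X$ is supported on a translated plane in $\mathbb{R}^3$, so its covariance matrix is singular; taking $\beta$ in the null space of that matrix, a suitable translate of $X$ is orthogonal to $\beta$ for every sample.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 10_000

# X lives on a (translated) 2-dimensional plane in R^3, so Cov(X) is singular.
z = rng.normal(size=(n_samples, 2))
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
X = z @ B.T + np.array([1.0, 2.0, 3.0])

A = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(A)
beta = eigvecs[:, 0]                     # A beta ~ 0: the zero-variance direction

a = np.mean(X @ beta) / (beta @ beta)    # the constant value of <X, beta> divided by ||beta||^2
residual = (X - a * beta) @ beta         # <X - a*beta, beta> for every sample
print("max |<X - a*beta, beta>| over all samples:", np.max(np.abs(residual)))  # ~0
```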


The question actually has little to do with probability theory; the observation holds for any square matrix, regardless of its origin.

It is easy to prove by considering the eigenvalues of the matrix: the matrix is invertible if and only if all of its eigenvalues are non-zero. This follows from the characteristic equation $\det (A-\lambda I)=0$: $\lambda = 0$ is a solution if and only if $\det(A-0\cdot I) = \det(A) = 0$, i.e. if and only if $A$ is singular.

Positive definite means that all eigenvalues are positive, whereas positive semi-definite means only that they are non-negative; so a covariance matrix that is merely positive semi-definite can have a zero eigenvalue and hence fail to be invertible.
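
A tiny numerical illustration of this (the two matrices are just examples): a positive semi-definite matrix with a zero eigenvalue has zero determinant and is singular, while a positive definite one is invertible.

```python
import numpy as np

psd = np.array([[1.0, 1.0],
                [1.0, 1.0]])   # eigenvalues 2 and 0: positive semi-definite, singular
pd  = np.array([[2.0, 1.0],
                [1.0, 2.0]])   # eigenvalues 3 and 1: positive definite, invertible

for name, M in [("semi-definite", psd), ("definite", pd)]:
    eigvals = np.linalg.eigvalsh(M)
    print(name, "| eigenvalues:", eigvals,
          "| det:", np.linalg.det(M),
          "| invertible:", bool(np.all(eigvals > 1e-12)))
```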


Here are some thoughts. Let $x$ be a random vector whose entries are i.i.d., and let $A$ be any square matrix which is not full rank. Then the covariance matrix of the random vector $y=Ax$ is not invertible. To see this, note that $\operatorname{Cov}(y)=\operatorname{Cov}(Ax)=A\operatorname{Cov}(x)A^T$, whose rank is at most the rank of $A$. Thus, regardless of the rank of $\operatorname{Cov}(x)$, the covariance matrix of $y$ is not invertible.
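
A quick simulation sketch of this construction (the particular rank-deficient $A$ is an arbitrary example): the entries of $x$ are i.i.d. with an invertible covariance, yet the covariance matrix of $y=Ax$ has rank at most $\operatorname{rank}(A)$ and is therefore singular.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 50_000

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],        # second row is twice the first, so rank(A) = 2
              [0.0, 1.0, 1.0]])
x = rng.normal(size=(n_samples, 3))   # i.i.d. entries, Cov(x) is (approximately) the identity
y = x @ A.T                           # samples of y = A x

C_y = np.cov(y, rowvar=False)
print("rank of A      :", np.linalg.matrix_rank(A))
print("rank of Cov(y) :", np.linalg.matrix_rank(C_y))        # also 2, so Cov(y) is singular
print("smallest eigenvalue of Cov(y):", np.linalg.eigvalsh(C_y)[0])
```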