Probability of having zero determinant

Solution 1:

Here is another (inductive) proof inspired by Marvis' suggestion below.

It relies on the fact that $\det A \ne0$ iff the columns of $A$ are linearly independent (li.). Let $\Delta_k = \{ (x_1,...,x_k) | x_i \in \mathbb{R}^n, \ x_1,...,x_k \mbox{ are not li.} \}$. Then if we show that $\Delta_n$ has Lebesgue measure zero, it will follow that the set $\{A | \det A = 0 \}$ has Lebesgue measure zero.

In fact, we will show that $m_k \Delta_k = 0$, for $k=1,...,n$, where $m_k$ is the Lebesgue measure on $\mathbb{R}^n \times\cdots \times \mathbb{R}^n$ ($k$ copies).

First we must show that $\Delta_k$ is measurable. If we let $\phi(x_1,...,x_k) = \min_{\alpha \in \mathbb{R}^k,\, \|\alpha\| = 1} \| \sum_i \alpha_i x_i \|$ (the minimum is attained because the unit sphere is compact), then we see that $\phi(x_1,...,x_k) = 0$ iff $\{x_1,...,x_k\}$ are not li. Since $\phi$ is continuous and $\Delta_k = \phi^{-1} \{0 \}$, we see that $\Delta_k$ is Borel measurable.
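
As a side remark, not part of the proof: for $k \le n$, $\phi(x_1,...,x_k)$ is exactly the smallest singular value of the $n \times k$ matrix whose columns are $x_1,...,x_k$, so it is easy to evaluate numerically. A minimal Python/NumPy sketch (the helper name is my own):

```python
import numpy as np

def phi(columns):
    """min over unit alpha of ||sum_i alpha_i x_i||; for k <= n this equals
    the smallest singular value of the n-by-k matrix with the given columns."""
    X = np.column_stack(columns)
    return np.linalg.svd(X, compute_uv=False)[-1]

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)

print(phi([x1, x2]))           # strictly positive: x1, x2 are linearly independent
print(phi([x1, x2, x1 + x2]))  # ~ 0 (up to rounding): the columns are dependent
```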

It is straightforward to see that $\Delta_1 = \{0\}$, hence $m_1 \Delta_1 = 0$. Now suppose $m_k \Delta_k = 0$ for some $1 \leq k < n$. Let $N = \mathbb{R}^n \times \Delta_k$. Since $m_k \Delta_k = 0$, Fubini gives $m_{k+1} N = 0$. Also, $N \subset \Delta_{k+1}$.

Consider a point $(x, x_1,...,x_k) \in \Delta_{k+1} \setminus N$ (note the indices on the $x$s). Then $\{x_1,...,x_k\}$ are li., but $\{x, x_1,...,x_k\}$ are not li. This can be true iff $x \in \operatorname{span} \{x_1,...,x_k\}$, a $k$-dimensional subspace of $\mathbb{R}^n$ passing through $0$. Since $k < n$, this is a proper subspace, so $m(\operatorname{span} \{x_1,...,x_k\}) = 0$. Then using Fubini we have (with a slight abuse of notation) \begin{eqnarray} m_{k+1} (\Delta_{k+1} \setminus N) &=& \int 1_{\Delta_{k+1} \setminus N} \, d m_{k+1}\\ & = & \int 1_{\operatorname{span} \{x_1,...,x_k\}}(x) \, 1_{\Delta_k^C}((x_1,...,x_k)) \, d x \, d(x_1,...,x_k)\\ & = & \int \left( \int 1_{\operatorname{span} \{x_1,...,x_k\}}(x) \, dx \right) 1_{\Delta_k^C}((x_1,...,x_k)) \, d(x_1,...,x_k)\\ & = & \int m(\operatorname{span} \{x_1,...,x_k\}) \, 1_{\Delta_k^C}((x_1,...,x_k)) \, d(x_1,...,x_k) \\ & = & 0 \end{eqnarray} Since $N \subset \Delta_{k+1}$, we also have $m_{k+1} (\Delta_{k+1} \cap N) = m_{k+1} N = 0$, hence $m_{k+1} \Delta_{k+1} = 0$. It follows by induction that $m_k \Delta_k = 0$ for $k=1,...,n$.
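
To get a numerical feel for the key fact used in this step -- a fresh random vector essentially never lands in the span of $k < n$ given vectors -- here is a small Monte Carlo sketch of my own (using NumPy; floating point can of course only show that the distance to the span stays strictly positive in every trial, not prove measure zero):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, trials = 5, 3, 10_000

min_dist = np.inf
for _ in range(trials):
    X = rng.standard_normal((n, k))   # columns x_1, ..., x_k
    x = rng.standard_normal(n)        # the fresh vector x
    # Distance from x to span{x_1, ..., x_k}, via least-squares projection.
    coef, *_ = np.linalg.lstsq(X, x, rcond=None)
    min_dist = min(min_dist, np.linalg.norm(x - X @ coef))

print(min_dist)  # strictly positive: in no trial did x land in the span
```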

If $\mu$ is any measure on $\mathbb{R}^n\times \cdots \times \mathbb{R}^n$ that is absolutely continuous with respect to the Lebesgue measure ($m_n$ in this case), then it is clear that $\mu \Delta_n = 0$ as well.

In particular, if $\mu$ can be expressed in terms of a joint probability density function, then $\mu \Delta_n = 0$. I believe this includes the case you intended in the question.
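
As an empirical sanity check of this conclusion (an illustration only, since floating point cannot certify an exact probability), one can sample matrices with i.i.d. entries drawn from a density and inspect their determinants; the NumPy sketch below assumes uniform entries on $[0,1]$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 4, 100_000

# i.i.d. entries with a joint density (uniform on [0,1] here).
dets = np.linalg.det(rng.uniform(size=(trials, n, n)))
print(np.count_nonzero(dets == 0.0))  # 0: no exactly singular matrix was sampled
print(np.abs(dets).min())             # the smallest |det| observed is still positive
```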

I mentioned this in the comments above, but think it is worth repeating here: http://www1.uwindsor.ca/math/sites/uwindsor.ca.math/files/05-03.pdf.

Solution 2:

A zero determinant imposes a constraint on the entries of the matrix: given all entries except one, the determinant is an affine function of the remaining entry, so (whenever the corresponding cofactor is nonzero, which happens with probability $1$ for continuous entries) the constraint fixes that entry to the single value needed for the determinant to be $0$. In effect we are asking that one entry, drawn from a continuous distribution, hit one fixed value, which is an event of measure $0$. In other words, the probability of having a $0$ determinant is $0$ as long as the entries come from a continuous distribution.
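
Here is a small NumPy sketch of this observation (my own illustration, with a hypothetical helper name): expanding the determinant along the free entry $A_{ij}$ gives $\det A = A_{ij} C_{ij} + (\text{terms without } A_{ij})$, where $C_{ij}$ is the cofactor, so whenever $C_{ij} \neq 0$ there is exactly one value of $A_{ij}$ that makes the determinant vanish.

```python
import numpy as np

def entry_forcing_zero_det(A, i, j):
    """The unique value of A[i, j] that makes det(A) = 0, assuming the
    (i, j) cofactor is nonzero (true with probability 1 for continuous entries)."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    cofactor = (-1) ** (i + j) * np.linalg.det(minor)
    B = A.copy()
    B[i, j] = 0.0                        # det(A) = A[i, j] * cofactor + det(B)
    return -np.linalg.det(B) / cofactor

rng = np.random.default_rng(3)
A = rng.uniform(size=(4, 4))
A[2, 1] = entry_forcing_zero_det(A, 2, 1)
print(np.linalg.det(A))                  # ~ 0: one specific entry value kills the determinant
```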

Solution 3:

The probability is zero. We need only consider the 2-by-2 case.

WLOG, consider a uniform distribution on $[0,1]$. Then, generate the $A_{11}$, $A_{21}$, and $A_{12}$ entries independently.

The $A_{12}$ entry will then be some multiple $k$ of $A_{11}$ (almost surely $A_{11} \neq 0$): $$k = \frac{A_{12}}{A_{11}}.$$

In order for $A$ to be singular, we need $A_{22} = kA_{21}$ exactly. This means that $A$ is singular if and only if the uniformly distributed entry $A_{22}$ assumes one specific value, namely $kA_{21}$. The probability of a continuous random variable assuming any single prescribed value is zero.

Indeed, for some values of $A_{11}$, $A_{12}$, and $A_{21}$, the required value $kA_{21} = \frac{A_{12}}{A_{11}} A_{21}$ can even exceed $1$, so it is not attainable at all. For example, consider

$$ A =\begin{pmatrix} .25 & .75 \\ .5 & a \end{pmatrix}.$$

The only value of $a$ that will make this singular is $a = 1.5$, which lies outside the unit interval.
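
A quick numerical check of this example (illustration only, using NumPy):

```python
import numpy as np

A11, A12, A21 = 0.25, 0.75, 0.5
a = (A12 / A11) * A21                        # k * A21, the only singular choice
print(a)                                     # 1.5 -- outside [0, 1]
print(np.linalg.det(np.array([[A11, A12],
                              [A21, a]])))   # ~ 0
```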

Why may we consider the 2-by-2 case only? For any singular $n$-by-$n$ matrix, it is possible to choose two rows and two columns such that the 2-by-2 matrix of their entries is singular.


Alternatively, consider any $n$-by-$n$ random matrix whose elements are uniformly chosen from $[0,1]$ (or $(0,1)$ -- it matters not).

Then, for the $i$th row and $j$th column, a necessary condition for the singularity of $A$ is that $A_{ij} = A_{pj} \frac{A_{iq}}{A_{pq}}$ for at least one pair of indices $p \in \{1,\ldots,n\},\ p \neq i$ and $q \in \{1,\ldots,n\},\ q \neq j$.

In other words, if you consider any randomly generated element in the matrix, then it must be a scalar multiple of some element in the same column, and that scalar multiple must be the ratio of the elements from the corresponding rows in a different column.

So, taking a union bound over these necessary conditions, you have $$P(\det A = 0) \le \sum_{p,q} P\left(A_{ij} = A_{pj}\frac{A_{iq}}{A_{pq}}\right).$$

The sum is finite, and each individual probability is the probability that $A_{ij}$ assumes one specific value of a continuous distribution, so each term is zero.

One may notice that the sum is missing the "and" terms -- it is essentially the probability of one event or another event occurring, summed many times, and inclusion-exclusion demands we subtract off the probability that two events happen simultaneously. However, this correction is obviously zero: if two of the conditions require $A_{ij} = \alpha$ and $A_{ij} = \beta$, then both can hold only when $\alpha = \beta$, and this equality again amounts to the determinant of a 2-by-2 matrix of entries being zero -- itself a probability-zero event for continuous entries.
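
To see numerically why each term in the bound vanishes, one can estimate the probability that one such equality holds up to a tolerance $\varepsilon$ and watch it shrink as $\varepsilon \to 0$. The Monte Carlo sketch below (my own, for one fixed choice of indices in the 2-by-2 case) is only suggestive, not a proof:

```python
import numpy as np

rng = np.random.default_rng(4)
trials = 1_000_000

# One fixed choice of indices in a 2-by-2 matrix of U(0,1) entries:
# i = 0, j = 0, p = 1, q = 1, i.e. the event A00 = A10 * A01 / A11.
A = rng.uniform(size=(trials, 2, 2))
gap = np.abs(A[:, 0, 0] - A[:, 1, 0] * A[:, 0, 1] / A[:, 1, 1])

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(eps, (gap < eps).mean())  # estimate shrinks with eps, consistent with probability 0
```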

Solution 4:

Check this related work:
http://www.inf.kcl.ac.uk/staff/ccooper/papers/Li_Matrix_GFt.pdf

It considers a more general case.