Is this determinant always non-negative?

For any $(a_1,a_2,\cdots,a_n)\in\mathbb{R}^n$, a matrix $A$ is defined by

$$A_{ij}=\frac1{1+|a_i-a_j|}$$

Is $\det(A)$ always non-negative? I ran some numerical tests and the claim seems to be true, but I have no idea how to prove it. Thanks!
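(For reference, a minimal sketch of such a numerical test, assuming numpy:)

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(1, 8))
    a = 10 * rng.normal(size=n)
    # Build A_ij = 1 / (1 + |a_i - a_j|) and check det(A) >= 0.
    A = 1.0 / (1.0 + np.abs(a[:, None] - a[None, :]))
    assert np.linalg.det(A) >= -1e-12  # non-negative up to rounding
```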


Here is an analytic proof (which I think I have learnt somewhere else, although not in this form). We first show an algebraic fact:

Theorem 1. Let $\mathbb{K}$ be a commutative ring, and let $n\in\mathbb{N} $. Let $p_{1},p_{2},\ldots,p_{n-1}$ be $n-1$ elements of $\mathbb{K}$. Let $q_1, q_2, \ldots, q_{n-1}$ be $n-1$ further elements of $\mathbb{K}$. For any $\left( i,j\right) \in\left\{ 1,2,\ldots,n\right\} ^{2}$ satisfying $i\leq j$, we set \begin{align} a_{i,j}=\prod_{r=i}^{j-1}p_{r}=p_{i}p_{i+1}\cdots p_{j-1}. \end{align} (When $i=j$, this product is empty and thus evaluates to $1$.) For any $\left( i,j\right) \in\left\{ 1,2,\ldots,n\right\} ^{2}$ satisfying $i>j$, we set \begin{align} a_{i,j}=\prod_{r=j}^{i-1}q_{r}=q_{j}q_{j+1}\cdots q_{i-1}. \end{align} Let $A$ be the $n\times n$-matrix $\left( a_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$. We have \begin{equation} \det A=\prod_{r=1}^{n-1}\left( 1 - p_r q_r \right) . \end{equation} (We understand the right hand side to be an empty product if $n$ is $0$ or $1$.)

(Example: When $n=4$, the matrix $A$ in Theorem 1 equals $\begin{pmatrix} 1 & p_{1} & p_{1}p_{2} & p_{1}p_{2}p_{3}\\ q_{1} & 1 & p_{2} & p_{2}p_{3}\\ q_{1}q_{2} & q_{2} & 1 & p_{3}\\ q_{1}q_{2}q_{3} & q_{2}q_{3} & q_{3} & 1 \end{pmatrix}$.)
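Before the proof, here is a quick numerical sanity check of Theorem 1 over $\mathbb{K}=\mathbb{R}$ (a sketch, assuming numpy; the construction of the $a_{i,j}$ follows the definition above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
p = rng.normal(size=n - 1)
q = rng.normal(size=n - 1)

# a_{i,j} = p_i ... p_{j-1} for i <= j, and q_j ... q_{i-1} for i > j
# (1-indexed as in the theorem; empty products evaluate to 1).
A = np.empty((n, n))
for i in range(1, n + 1):
    for j in range(1, n + 1):
        A[i - 1, j - 1] = np.prod(p[i - 1:j - 1]) if i <= j else np.prod(q[j - 1:i - 1])

# The claim: det A = prod_{r=1}^{n-1} (1 - p_r q_r).
assert np.isclose(np.linalg.det(A), np.prod(1 - p * q))
```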

Proof of Theorem 1. First, a notation: If $N\in\mathbb{N}$ and $M\in\mathbb{N}$, if $B$ is an $N\times M$-matrix, and if $u\in\left\{ 1,2,\ldots,N\right\} $ and $v\in\left\{ 1,2,\ldots,M\right\} $, then we let $B_{\sim u,\sim v}$ denote the $\left( N-1\right) \times\left( M-1\right) $-matrix obtained from the matrix $B$ by crossing out the $u$-th row and the $v$-th column. It is well-known that if $K$ is a positive integer, and if the $K$-th row of a $K\times K$-matrix $D$ is zero apart from its last entry, then \begin{align} \det D=d\det\left( D_{\sim K,\sim K}\right) , \label{darij1.eq.1} \tag{1} \end{align} where $d$ is the last entry of the $K$-th row of $D$. (This can be viewed as a particular case of Laplace expansion with respect to the $K$-th row.)

Now, we shall prove that \begin{align} \det\left( \left( a_{i,j}\right) _{1\leq i\leq k,\ 1\leq j\leq k}\right) =\prod\limits_{r=1}^{k-1}\left( 1 - p_r q_r \right) \label{darij1.eq.2} \tag{2} \end{align} for every $k\in\left\{ 0,1,\ldots,n\right\} $.

Indeed, we prove \eqref{darij1.eq.2} by induction over $k$: If $k=0$ or $k=1$, then \eqref{darij1.eq.2} holds by inspection. For the induction step, fix $K\in\left\{ 2,3,\ldots,n\right\} $, and assume (as the induction hypothesis) that \eqref{darij1.eq.2} holds for $k=K-1$. That is, we assume that \begin{align} \det\left( \left( a_{i,j}\right) _{1\leq i\leq K-1,\ 1\leq j\leq K-1}\right) =\prod\limits_{r=1}^{\left(K-1\right)-1} \left( 1 - p_r q_r \right) . \label{darij1.eq.3} \tag{3} \end{align} Now, if we subtract $q_{K-1}$ times the $\left( K-1\right) $-th row of the matrix $\left( a_{i,j}\right) _{1\leq i\leq K,\ 1\leq j\leq K}$ from its $K$-th row, then we obtain a new matrix $C$. It is easy to see that the $K$-th row of the new matrix $C$ is zero apart from its last entry, which is $1-p_{K-1}q_{K-1}$. (Indeed, for every $j<K$, the definition of the $a_{i,j}$ yields $a_{K,j}=q_{j}q_{j+1}\cdots q_{K-1}=q_{K-1}a_{K-1,j}$, so these entries cancel; meanwhile, in the last column, $a_{K,K}-q_{K-1}a_{K-1,K}=1-q_{K-1}p_{K-1}$.) Thus, \eqref{darij1.eq.1} (applied to $D=C$) yields \begin{align} \det C &=\left( 1-p_{K-1}q_{K-1}\right) \det\left( \underbrace{C_{\sim K,\sim K}}_{=\left( a_{i,j}\right) _{1\leq i\leq K-1,\ 1\leq j\leq K-1} }\right) \\ &=\left( 1-p_{K-1}q_{K-1}\right) \underbrace{\det\left( \left( a_{i,j} \right) _{1\leq i\leq K-1,\ 1\leq j\leq K-1}\right) }_{\substack{=\prod _{r=1}^{\left( K-1\right) -1}\left( 1-p_{r}q_r\right) \\\text{(by \eqref{darij1.eq.3})}}} \\ &=\left( 1-p_{K-1}q_{K-1}\right) \prod_{r=1}^{\left( K-1\right) -1}\left( 1-p_{r}q_r\right) \\ & =\prod_{r=1}^{K-1}\left( 1-p_{r}q_r\right) . \label{darij1.eq.4} \tag{4} \end{align}

But the matrix $C$ was obtained from the matrix $\left( a_{i,j}\right) _{1\leq i\leq K,\ 1\leq j\leq K}$ by subtracting a multiple of a row from another (see the definition of $C$); it is known that such a subtraction does not change the determinant of a matrix. Thus, $\det C=\det\left( \left( a_{i,j}\right) _{1\leq i\leq K,\ 1\leq j\leq K}\right) $. Compared with \eqref{darij1.eq.4}, this yields \begin{align} \det\left( \left( a_{i,j}\right) _{1\leq i\leq K,\ 1\leq j\leq K}\right) =\prod_{r=1}^{K-1}\left( 1-p_{r}q_r\right) , \end{align} and therefore \eqref{darij1.eq.2} holds for $k=K$. This completes the induction step, so that \eqref{darij1.eq.2} is proven.

Now, applying \eqref{darij1.eq.2} to $k=n$, we obtain $\det\left( \left( a_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}\right) =\prod_{r=1}^{n-1}\left( 1-p_{r} q_r\right) $. Since $A=\left( a_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$, this rewrites as $\det A=\prod_{r=1}^{n-1}\left( 1-p_{r}q_r\right) $. Theorem 1 is proven. $\blacksquare$

Corollary 1b. Let $\mathbb{K}$ be a commutative ring, and let $n\in\mathbb{N} $. Let $p_{1},p_{2},\ldots,p_{n-1}$ be $n-1$ elements of $\mathbb{K}$. For any $\left( i,j\right) \in\left\{ 1,2,\ldots,n\right\} ^{2}$ satisfying $i\leq j$, we set $a_{i,j}=\prod_{r=i}^{j-1}p_{r}=p_{i}p_{i+1}\cdots p_{j-1}$. (When $i=j$, this product is empty and thus evaluates to $1$.) For any $\left( i,j\right) \in\left\{ 1,2,\ldots,n\right\} ^{2}$ satisfying $i>j$, we set $a_{i,j}=a_{j,i}$ (since $a_{j,i}$ has already been defined by the previous sentence). Let $A$ be the $n\times n$-matrix $\left( a_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$. We have \begin{equation} \det A=\prod_{r=1}^{n-1}\left( 1-p_{r}^{2}\right) . \end{equation} (We understand the right hand side to be an empty product if $n$ is $0$ or $1$.)

Proof of Corollary 1b. This follows from Theorem 1, applied to $q_i = p_i$. $\blacksquare$
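A quick numerical check of Corollary 1b over $\mathbb{K}=\mathbb{R}$ (a sketch, assuming numpy; note that the formula holds whether or not the $p_r$ lie in $[-1,1]$):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
p = rng.uniform(-2, 2, size=n - 1)

# Symmetric case: a_{i,j} = p_{min(i,j)} ... p_{max(i,j)-1} (0-indexed below).
A = np.empty((n, n))
for i in range(n):
    for j in range(n):
        A[i, j] = np.prod(p[min(i, j):max(i, j)])

assert np.isclose(np.linalg.det(A), np.prod(1 - p ** 2))
```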

Corollary 2. Let $n\in\mathbb{N}$, and let $q_{1},q_{2},\ldots,q_{n}$ be $n$ distinct reals. Let $w\in\left[ 0,1\right) $. Then, $\det\left( \left( w^{\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}\right) >0$.

Proof of Corollary 2. If we permute the numbers $q_{1},q_{2},\ldots,q_{n}$ (among themselves), then the matrix $\left( w^{\left\vert q_{i} -q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ changes, but its determinant $\det\left( \left( w^{\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}\right) $ remains unchanged (because the rows of the matrix get permuted, and so do the columns, but both times one and the same permutation is used, so the signs cancel). Hence, we can WLOG assume that $q_{1}<q_{2}<\cdots<q_{n}$ (we can achieve this by a permutation). Assume this, and set $p_{r}=w^{q_{r+1}-q_{r}}$ for every $r\in\left\{ 1,2,\ldots,n-1\right\} $. Notice that, for every $r\in\left\{ 1,2,\ldots,n-1\right\} $, we have $p_{r}=w^{q_{r+1}-q_{r}}\in\left[ 0,1\right) $ (since $w\in\left[ 0,1\right) $ and $q_{r+1}-\underbrace{q_{r} }_{<q_{r+1}}>0$).

Set $\mathbb{K}=\mathbb{R}$. Define the matrix $A$ as in Corollary 1b. Then, it is easy to see that $A=\left( w^{\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ (since $q_1 < q_2 < \cdots < q_n$). But Corollary 1b yields \begin{align} \det A=\prod_{r=1}^{n-1}\underbrace{\left( 1-p_{r}^{2}\right) }_{\substack{>0\\\text{(since }p_{r}\in\left[ 0,1\right) \text{)}}}>0. \end{align} Since $A=\left( w^{\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}$, this rewrites as $\det\left( \left( w^{\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}\right) >0$. Corollary 2 is thus proven. $\blacksquare$
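A numerical illustration of Corollary 2, mirroring the substitution $p_r=w^{q_{r+1}-q_r}$ from the proof (a sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
q = np.sort(rng.normal(size=5))           # distinct reals q_1 < q_2 < ... < q_n
w = 0.7                                   # any w in [0, 1)

W = w ** np.abs(q[:, None] - q[None, :])  # the matrix (w^{|q_i - q_j|})
p = w ** np.diff(q)                       # p_r = w^{q_{r+1} - q_r}, each in [0, 1)

assert np.isclose(np.linalg.det(W), np.prod(1 - p ** 2))  # Corollary 1b
assert np.linalg.det(W) > 0                               # Corollary 2
```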

Corollary 3. Let $q_{1},q_{2},\ldots,q_{n}$ be $n$ distinct reals. Let $w\in\left[ 0,1\right) $. Then, the $n\times n$-matrix $\left( w^{\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ is positive-definite.

Proof of Corollary 3. Sylvester's criterion for positive definiteness says that a symmetric matrix $\left( g_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}\in\mathbb{R}^{n\times n}$ is positive-definite if and only if every $k\in\left\{ 1,2,\ldots,n\right\} $ satisfies $\det\left( \left( g_{i,j}\right) _{1\leq i\leq k,\ 1\leq j\leq k}\right) >0$. We can apply this to $g_{i,j}=w^{\left\vert q_{i}-q_{j}\right\vert }$ (since the matrix $\left( w^{\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ is symmetric), and thus it remains to prove that every $k\in\left\{ 1,2,\ldots,n\right\} $ satisfies $\det\left( \left( w^{\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq k,\ 1\leq j\leq k}\right) >0$. But this follows from Corollary 2 (applied to $k$ instead of $n$). Thus, Corollary 3 is proven. $\blacksquare$
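Numerically, positive definiteness can be confirmed with a Cholesky factorization, which succeeds precisely for positive-definite matrices (a sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(3)
q = rng.normal(size=6)  # distinct reals, in no particular order
w = 0.5

W = w ** np.abs(q[:, None] - q[None, :])
np.linalg.cholesky(W)   # raises LinAlgError unless W is positive-definite

# Equivalently, Sylvester's criterion: all leading principal minors are positive.
assert all(np.linalg.det(W[:k, :k]) > 0 for k in range(1, len(q) + 1))
```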

Theorem 4. Let $q_{1},q_{2},\ldots,q_{n}$ be $n$ distinct reals. Then, the $n\times n$-matrix $\left( \dfrac{1}{1+\left\vert q_{i} -q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ is positive-definite.

Proof of Theorem 4. The matrix in question is clearly symmetric. Thus, we only need to prove that $\sum\limits_{i=1}^{n}\sum\limits_{j=1} ^{n}\dfrac{1}{1+\left\vert q_{i}-q_{j}\right\vert }x_{i}x_{j}>0$ for every nonzero vector $\left( x_{1},x_{2},\ldots,x_{n}\right) \in\mathbb{R}^{n}$.

So fix a nonzero vector $\left( x_{1},x_{2},\ldots,x_{n}\right) \in\mathbb{R}^{n}$. For every $w\in\left[ 0,1\right) $, we have $\sum\limits _{i=1}^{n}\sum\limits_{j=1}^{n}w^{\left\vert q_{i}-q_{j}\right\vert }x_{i}x_{j}>0$ (by Corollary 3). Now, \begin{align} \sum_{i=1}^{n}\sum_{j=1}^{n}\underbrace{\dfrac{1}{1+\left\vert q_{i} -q_{j}\right\vert }}_{\substack{=\int_{0}^{1}w^{\left\vert q_{i} -q_{j}\right\vert }dw\\\text{(since }\left\vert q_{i}-q_{j}\right\vert \geq0\text{)}}}x_{i}x_{j} &=\sum_{i=1}^{n}\sum_{j=1}^{n}\left( \int_{0}^{1}w^{\left\vert q_{i} -q_{j}\right\vert }dw\right) x_{i}x_{j} \\ &=\int_{0}^{1}\underbrace{\left( \sum_{i=1}^{n}\sum_{j=1}^{n}w^{\left\vert q_{i}-q_{j}\right\vert }x_{i}x_{j}\right) }_{>0}dw>0 . \end{align} This is precisely what we wanted to show, and thus Theorem 4 is proven. $\blacksquare$
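Both ingredients of this proof, the identity $\int_0^1 w^t\,dw=\frac1{1+t}$ for $t\geq0$ and the resulting positive definiteness, are easy to check numerically (a sketch, assuming numpy and scipy):

```python
import numpy as np
from scipy.integrate import quad

# The identity behind the proof: int_0^1 w^t dw = 1 / (1 + t) for t >= 0.
for t in [0.0, 0.5, 3.7]:
    val, _ = quad(lambda w: w ** t, 0, 1)
    assert np.isclose(val, 1 / (1 + t))

# Consequently, (1 / (1 + |q_i - q_j|)) has only positive eigenvalues.
rng = np.random.default_rng(4)
q = rng.normal(size=6)  # distinct reals
A = 1.0 / (1.0 + np.abs(q[:, None] - q[None, :]))
assert np.all(np.linalg.eigvalsh(A) > 0)
```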

Corollary 5. Let $q_{1},q_{2},\ldots,q_{n}$ be $n$ reals. Then, the $n\times n$-matrix $\left( \dfrac{1}{1+\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ is positive-semidefinite.

Proof of Corollary 5. Recall that a limit of a convergent sequence of positive-definite matrices is a positive-semidefinite matrix. Hence, Corollary 5 can be derived from Theorem 4 by continuity (because it is always possible to deform our $n$ reals $q_{1},q_{2} ,\ldots,q_{n}$ into $n$ distinct reals). An alternative argument would be using the semidefinite analogue of Sylvester's criterion for positive definiteness. (Be warned, however, that this analogue requires all principal minors to be $\geq0$, rather than only the leading principal minors!) Yet another alternative proof proceeds by "lumping" together the rows and the columns corresponding to equal numbers among our $q_1, q_2, \ldots, q_n$ and using this to write our matrix $\left( \dfrac{1}{1+\left\vert q_{i}-q_{j}\right\vert }\right) _{1\leq i\leq n,\ 1\leq j\leq n}$ in the form $BCB^T$, where $C = \left( \dfrac{1}{1+\left\vert r_{i}-r_{j}\right\vert }\right) _{1\leq i\leq k,\ 1\leq j\leq k}$ for some distinct reals $r_1, r_2, \ldots, r_k$ (namely, the list $\left(r_1, r_2, \ldots, r_k\right)$ should be simply the list $\left(q_1, q_2, \ldots, q_n\right)$ with all duplicates removed). $\blacksquare$
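The "lumping" argument can be made concrete (a sketch, assuming numpy; here $B$ is the 0-1 matrix matching each $q_i$ to the position of its value in the deduplicated list $r_1,\ldots,r_k$):

```python
import numpy as np

q = np.array([2.0, 5.0, 2.0, 7.0, 5.0])     # reals with duplicates
r, idx = np.unique(q, return_inverse=True)  # distinct values and their positions

B = np.eye(len(r))[idx]                     # B[i, k] = 1 iff q_i = r_k
C = 1.0 / (1.0 + np.abs(r[:, None] - r[None, :]))  # PD by Theorem 4
A = 1.0 / (1.0 + np.abs(q[:, None] - q[None, :]))

assert np.allclose(A, B @ C @ B.T)          # A = B C B^T, hence PSD
```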


This answer is a suggestion as to how you may prove the result, but does not contain a proof. Instead, I'll show how a similar problem has been answered, and offer pointers to claims that would show your result:

Consider the matrix $$A_{ij} = \frac{1}{1 + |a_i - a_j|^2}.$$ Ultimately we will find that $\det(A)$ is always non-negative. The reason is that $A$ itself is a positive definite matrix (in the kernel sense defined below). It belongs to the family of the Rational Quadratic covariance function (a.k.a. the Student-t kernel):

The rational quadratic covariance between two points separated by $d$ distance units is given by $$C_{RQ}(d) = \Bigg(1+\frac{d^2}{2\alpha k^2}\Bigg)^{-\alpha}.$$ In this case $\alpha = 1$ and $k = \frac{\sqrt{2}}{2}$, so that $C_{RQ}(d) = \frac{1}{1+d^2}$.
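(A one-line check that these parameter values reduce $C_{RQ}$ to the kernel above, assuming numpy:)

```python
import numpy as np

alpha, k = 1.0, np.sqrt(2) / 2
d = np.linspace(0, 10, 101)                 # distances |a_i - a_j|
C_RQ = (1 + d ** 2 / (2 * alpha * k ** 2)) ** (-alpha)
assert np.allclose(C_RQ, 1 / (1 + d ** 2))  # since 2 * alpha * k^2 = 1
```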


(From Wikipedia): The covariance function of a spatial process $Z(x)$ is defined by $$C(x, y) = \operatorname{Cov}(Z(x), Z(y)).$$ For locations $x_1, x_2, \ldots, x_N \in D$, the variance of every linear combination $$X=\sum_{i=1}^N w_i Z(x_i)$$ can be computed as $$\operatorname{var}(X)=\sum_{i=1}^N \sum_{j=1}^N w_i C(x_i,x_j) w_j.$$ A function is a valid covariance function if and only if this variance is non-negative for all possible choices of $N$ and weights $w_1, \ldots, w_N$. A function with this property is called positive definite.
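In matrix form, the displayed variance is the quadratic form $w^{T}Cw$, so the condition says that every kernel matrix built from $C$ is positive semidefinite. A small sketch of this with the rational quadratic kernel, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(size=5)        # locations x_1, ..., x_N
weights = rng.normal(size=5)  # arbitrary weights w_1, ..., w_N

C = 1.0 / (1.0 + (a[:, None] - a[None, :]) ** 2)  # C_RQ(|a_i - a_j|), alpha = 1
assert weights @ C @ weights >= -1e-12            # var(X) >= 0 up to rounding
```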

For this particular problem, the $x_i$ in the statement are the $a_i$ in the matrix $A$ defined above, and $C(x_i, x_j) = C_{RQ}(|a_i - a_j|)$. It is well known (see below) that $C_{RQ}$ is positive definite in this sense, and so for every choice of weights $$\sum_{i=1}^N \sum_{j=1}^N w_i C_{RQ}(|a_i - a_j|) w_j \geq 0,$$ i.e. $A$ is positive definite in the sense above (positive semidefinite in the usual matrix terminology), and in particular $\det(A) \geq 0$.

Some references for the above (including the statement that $C_{RQ}$ is positive definite) are Gaussian Processes for Machine Learning (Rasmussen and Williams, 2006) and Classes of Kernels for Machine Learning: A Statistics Perspective (Genton, 2001).

Now, the kernel corresponding to the matrix you are interested in has been referred to as the Generalized T-Student Kernel: $$C(x, y) = \frac{1}{1 + |x - y|^d}$$ (your matrix is the case $d = 1$). The blog where that name appears offers the following "proof":

The Generalized T-Student Kernel has been proven to be a Mercer Kernel, thus having a positive semi-definite kernel matrix (Boughorbel, 2004).

The paper referenced is Conditionally Positive Definite Kernels for SVM Based Image Recognition by Boughorbel, Tarel, and Boujemaa. If this claim holds, then the determinant of your matrix is indeed non-negative.


We shall prove by mathematical induction on $n$ that $A$ is positive semidefinite. The base case $n=1$ is trivial. Note that positive semidefiniteness is preserved if we permute $a_1,\ldots,a_n$ or add the same constant to every $a_i$. So, we may assume that $a_1\ge a_2\ge \cdots\ge a_n=1$. Then $A$ becomes $$ A=\left( \begin{array}{cccc} 1 & \dfrac1{1+a_1-a_2} & \cdots & \dfrac1{a_1} \\ \dfrac1{1+a_1-a_2} & 1 & \cdots & \dfrac1{a_2} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac1{a_1} & \dfrac1{a_2} & \cdots & 1 \end{array}\right) =\left(\frac1{1+a_{\min(i,j)}-a_{\max(i,j)}}\right)_{i,j=1,2,\ldots,n}. $$

Since $a_{nn}=1>0$, $A$ is PSD if and only if the Schur complement $S$ of $a_{nn}$ in $A$ is PSD. For $i,j\in\{1,\ldots,n-1\}$, the entries of $S$ are given by \begin{align*} s_{ij} &=\frac1{1+a_{\min(i,j)}-a_{\max(i,j)}}-\frac1{a_ia_j}\\ &=\frac1{1+a_{\min(i,j)}-a_{\max(i,j)}}-\frac1{a_{\min(i,j)}a_{\max(i,j)}}\\ &=\frac1{1+a_{\min(i,j)}-a_{\max(i,j)}}\cdot\frac{a_{\min(i,j)}+1}{a_{\min(i,j)}}\cdot\frac{a_{\max(i,j)}-1}{a_{\max(i,j)}}\\ &=\left(\frac1{1+a_{\min(i,j)}-a_{\max(i,j)}}\right) \left(1-\frac1{a_{\max(i,j)}}\right) \left(1+\frac1{a_{\min(i,j)}}\right). \end{align*}

Therefore $S$ is a Hadamard (entrywise) product of three matrices. By the induction hypothesis (applied to $a_1,\ldots,a_{n-1}$), the first matrix, $\left(\frac1{1+a_{\min(i,j)}-a_{\max(i,j)}}\right)_{i,j=1,2,\ldots,n-1}=\left(\frac1{1+|a_i-a_j|}\right)_{i,j=1,2,\ldots,n-1}$, is PSD. It is easy to verify that the other two are PSD as well. Let $E_k$ denote the matrix whose $k\times k$ leading principal submatrix contains only ones and whose other entries are zero. Since $1-\frac1{a_i}$ is a nonnegative decreasing sequence, the matrix $\left(1-\frac1{a_{\max(i,j)}}\right)_{i,j=1,2,\ldots,n-1}$ is a nonnegative weighted sum of $E_1,E_2,\ldots,E_{n-1}$. Similarly, as $1+\frac1{a_i}$ is increasing, $\left(1+\frac1{a_{\min(i,j)}}\right)_{i,j=1,2,\ldots,n-1}$ is also a nonnegative weighted sum of PSD matrices (whose trailing, rather than leading, principal submatrices consist of ones). So, by the Schur product theorem, $S$ is PSD, and hence $A$ is PSD.
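A numerical check of the key computation above, namely that the Schur complement $S$ equals the Hadamard product of the three displayed matrices (a sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(5)
a = np.sort(rng.uniform(1, 10, size=6))[::-1]  # a_1 >= a_2 >= ... >= a_n
a[-1] = 1.0                                    # normalize so that a_n = 1

A = 1.0 / (1.0 + np.abs(a[:, None] - a[None, :]))
S = A[:-1, :-1] - np.outer(A[:-1, -1], A[-1, :-1])  # Schur complement of a_nn = 1

idx = np.arange(len(a) - 1)
a_min = a[np.minimum(idx[:, None], idx[None, :])]   # a_{min(i,j)} (smaller index)
a_max = a[np.maximum(idx[:, None], idx[None, :])]   # a_{max(i,j)} (larger index)

H1 = 1.0 / (1.0 + a_min - a_max)  # same kernel on a_1, ..., a_{n-1}
H2 = 1.0 - 1.0 / a_max
H3 = 1.0 + 1.0 / a_min

assert np.allclose(S, H1 * H2 * H3)            # S is the Hadamard product
assert np.min(np.linalg.eigvalsh(A)) > -1e-12  # and A is indeed PSD
```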