What's the sign of $\det\left(\sqrt{i^2+j^2}\right)_{1\le i,j\le n}$?

Recall that $e^{-|x|}=\int_0^\infty e^{-sx^2}\,d\mu(s)$ for some non-negative probability measure $\mu$ on $[0,+\infty)$ (Bernstein, totally monotone functions, etc.). Thus $e^{-t|x|}=\int_0^\infty e^{-st^2x^2}\,d\mu(s)$ for $t>0$. In particular, since the matrices $(e^{-st^2(i^2+j^2)})_{i,j}$ are rank one and non-negative definite, their mixture $(e^{-t\sqrt{i^2+j^2}})_{i,j}$ is a non-negative definite matrix for all $t>0$. Hence, considering the first two terms in the Taylor expansion as $t\to 0+$, we conclude that $-A$ is non-negative definite on the $(n-1)$-dimensional subspace $\sum_i x_i=0$, so the signature of $A$ is $(1,n-1)$. To finish, we just need to exclude the zero eigenvalue, i.e., to show that the columns are linearly independent, which I leave to someone else (i.e., up to this point we have shown that the determinant has the conjectured sign or is $0$).
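For anyone who wants to see the statement in action before reading further, here is a quick numerical sanity check (a sketch using `numpy`; the range of $n$ tested is an arbitrary choice):

```python
import numpy as np

# Check that A = (sqrt(i^2 + j^2)) has one positive and n-1 negative
# eigenvalues, so that det(A) has sign (-1)^(n-1).
for n in range(2, 9):
    i, j = np.indices((n, n)) + 1            # 1-based row/column indices
    A = np.sqrt(i**2 + j**2)
    eigs = np.linalg.eigvalsh(A)             # A is symmetric
    assert np.sum(eigs > 0) == 1 and np.sum(eigs < 0) == n - 1
    assert np.sign(np.linalg.det(A)) == (-1) ** (n - 1)
print("signature (1, n-1) and det sign (-1)^(n-1) confirmed for n = 2, ..., 8")
```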


What follows is just an explication of fedja's beautiful ideas, with the minor gap of non-zero-ness filled in. There seemed to be enough interest to warrant sharing it. Along the way I decided to make everything very explicit, to keep it accessible, since the argument is so pretty.


Let $$A = \left(\sqrt{i^2+j^2}\right)_{1 \leq i, j \leq n}.$$

Theorem: $(-1)^{n-1} \det(A) > 0$.

Proof: Since $A$ is symmetric, it has an orthogonal basis of eigenvectors, and its determinant is the product of its eigenvalues. We'll show there are $n-1$ negative eigenvalues and $1$ positive eigenvalue, i.e. $A$ is non-degenerate with signature $(1, n-1)$.

Let $x_0 := (1, \ldots, 1)^\top$. Considering the quadratic form corresponding to $A$, $$x_0^\top A x_0 = \sum_{1 \leq i, j \leq n} \sqrt{i^2 + j^2} > 0.$$ Thus there must be at least $1$ positive eigenvalue $\lambda_+>0$. On the other hand, we have the following.

Claim: If $x \cdot x_0 = 0$ and $x \neq 0$, then $x^\top A x < 0$.

Assume the claim for the moment. If $A$ had another eigenvalue $\lambda \geq 0$ besides $\lambda_+$, then the $2$-dimensional subspace spanned by the eigenvectors of $\lambda_+$ and $\lambda$ would intersect the $(n-1)$-dimensional subspace $\{x : x \cdot x_0 = 0\}$ non-trivially; at a non-zero point $y$ of the intersection we would have both $y^\top A y \geq 0$ and $y^\top A y < 0$, a contradiction. So the theorem follows from the claim. For readability, we break the argument into two pieces.

Subclaim: If $x \cdot x_0 = 0$ and $x \neq 0$, then $x^\top A x \leq 0$.

Proof of subclaim: We first introduce $$B(t) := \left(e^{-t\sqrt{i^2+j^2}}\right)_{1 \leq i,j \leq n}.$$ Intuitively, $-A$ is the linear coefficient of the Taylor expansion of $B(t)$ around $t=0$. More precisely, working entry-wise, $$\lim_{t \to 0} \frac{B_0 - B(t)}{t} = A$$ where $B_0 := B(0) = (1)_{1 \leq i, j \leq n}$ is the matrix of all $1$'s.
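(If you want to convince yourself of this entry-wise limit numerically, here is a small sketch; the size $n$ and the sample values of $t$ are arbitrary:)

```python
import numpy as np

# Entrywise check that (B0 - B(t))/t -> A as t -> 0+.
n = 5
i, j = np.indices((n, n)) + 1
A = np.sqrt(i**2 + j**2)
B0 = np.ones((n, n))
for t in [1e-2, 1e-4, 1e-6]:
    Bt = np.exp(-t * A)                      # B(t)_{ij} = e^{-t sqrt(i^2+j^2)}
    err = np.max(np.abs((B0 - Bt) / t - A))
    print(f"t = {t:.0e}: max entrywise error = {err:.2e}")
# The error shrinks like t, consistent with B(t) = B0 - t*A + O(t^2).
```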

Since $x \cdot x_0 = 0$, we have $B_0 x = 0$. Thus $$\tag{1}\label{1}x^\top A x = \lim_{t \to 0} x^\top \frac{B_0 - B(t)}{t} x = -\lim_{t \to 0} \frac{x^\top B(t)\,x}{t}.$$

The key insight is to express the quadratic form $x^\top B(t) x$ in a manifestly positive way. For that, it is a fact$^1$ that for all $z \geq 0$, $$e^{-\sqrt{z}} = \frac{1}{2\sqrt{\pi}} \int_0^\infty e^{-zs} s^{-3/2} \exp\left(-\frac{1}{4s}\right)\,ds.$$
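(This identity is easy to spot-check numerically; a sketch with `scipy.integrate.quad`, the sample values of $z$ being arbitrary:)

```python
import numpy as np
from scipy.integrate import quad

# Check e^{-sqrt(z)} = (1/(2 sqrt(pi))) * int_0^inf e^{-z s} s^{-3/2} e^{-1/(4s)} ds.
for z in [0.5, 1.0, 4.0, 30.0]:
    val, _ = quad(lambda s: np.exp(-z * s) * s**-1.5 * np.exp(-1 / (4 * s)),
                  0, np.inf)
    val /= 2 * np.sqrt(np.pi)
    print(f"z = {z:5.1f}: integral = {val:.10f}  exp(-sqrt(z)) = {np.exp(-np.sqrt(z)):.10f}")
```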

Thus, for all $t \geq 0$, we have the following entry-wise equality: $$B(t) = \left(e^{-t\sqrt{i^2+j^2}}\right)_{1 \leq i,j \leq n} = \int_0^\infty \left(e^{-t^2(i^2+j^2)s}\right)_{1 \leq i,j \leq n} \, g(s)\,ds$$ where $$g(s) := \frac{1}{2\sqrt{\pi}} s^{-3/2} \exp\left(-\frac{1}{4s}\right) > 0\quad\text{for all }s > 0.$$ The integrand matrices can be decomposed as an outer product, namely $$\left(e^{-t^2(i^2+j^2)s}\right)_{1 \leq i,j \leq n} = \left(e^{-t^2 i^2 s} e^{-t^2 j^2 s}\right)_{1 \leq i,j \leq n} = u(s, t)u(s, t)^\top$$ where $$u(s, t) := (e^{-t^2 i^2 s})_{1 \leq i \leq n}$$ is a column vector. Thus, $$\begin{align*} x^\top B(t)\,x &= x^\top\left(\int_0^\infty u(s, t) u(s, t)^\top \, g(s)\,ds\right)x \\ &= \int_0^\infty (u(s, t)^\top x)^\top (u(s, t)^\top x)\,g(s)\,ds \\ &= \int_0^\infty (u(s, t) \cdot x)^2\,g(s)\,ds. \end{align*}$$ Hence \eqref{1} becomes $$\tag{2}\label{2}x^\top A\,x = -\lim_{t \to 0^+} \int_0^\infty \frac{(u(s, t) \cdot x)^2}{t} g(s)\,ds.$$
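(As a sanity check of this chain of equalities, one can compare both sides numerically; a sketch, where the specific $n$, $t$, and random seed are arbitrary choices:)

```python
import numpy as np
from scipy.integrate import quad

# Check x^T B(t) x = int_0^inf (u(s,t) . x)^2 g(s) ds for a random x.
rng = np.random.default_rng(0)
n, t = 5, 0.7
idx = np.arange(1, n + 1)
i, j = np.indices((n, n)) + 1
Bt = np.exp(-t * np.sqrt(i**2 + j**2))
x = rng.standard_normal(n)
g = lambda s: s**-1.5 * np.exp(-1 / (4 * s)) / (2 * np.sqrt(np.pi))
lhs = x @ Bt @ x
rhs, _ = quad(lambda s: np.dot(np.exp(-t**2 * idx**2 * s), x) ** 2 * g(s),
              0, np.inf)
print(lhs, rhs)   # the two values agree to quadrature accuracy
```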

It is now clear that $x^\top A\,x \leq 0$ since $g(s) \geq 0$, finishing the subclaim.

Proof of claim: It will take a little more work to show that $x^\top A\,x < 0$ is strict. Let $t = 1/\alpha$ for $\alpha > 0$. Apply the substitution $s = \alpha^2 r$ to the integral in \eqref{2} to get $$\begin{align*} \int_0^\infty \frac{(u(s, t) \cdot x)^2}{t} g(s)\,ds &\geq \int_{\alpha^2/2}^{\alpha^2} \frac{(u(s, t) \cdot x)^2}{t} g(s)\,ds \\ &= \int_{1/2}^1 (u(\alpha^2 r, 1/\alpha) \cdot x)^2 \alpha g(\alpha^2 r)\,\alpha^2\,dr \\ &= \int_{1/2}^1 \alpha^3 (u(r, 1) \cdot x)^2 \frac{1}{2\sqrt{\pi}} \alpha^{-3} r^{-3/2} \exp\left(-\frac{1}{4\alpha^2 r}\right)\,dr \\ &= \frac{1}{2\sqrt{\pi}} \int_{1/2}^1 (u(r, 1) \cdot x)^2 r^{-3/2} \exp\left(-\frac{1}{4\alpha^2 r}\right)\,dr. \end{align*}$$

Thus \eqref{2} becomes $$\begin{align*} x^\top A\,x &\leq -\lim_{\alpha \to \infty} \frac{1}{2\sqrt{\pi}} \int_{1/2}^1 (u(r, 1) \cdot x)^2 r^{-3/2} \exp\left(-\frac{1}{4\alpha^2 r}\right)\,dr \\ &= -\frac{1}{2\sqrt{\pi}} \int_{1/2}^1 (u(r, 1) \cdot x)^2 r^{-3/2}\,dr. \end{align*}$$

Hence it suffices to show that $u(r, 1) \cdot x \neq 0$ for some $1/2 \leq r \leq 1$ (by continuity, the integrand is then bounded away from $0$ on some subinterval). Indeed, $\{u(r_j, 1)\}_{j=1}^n$ is a basis for any $0 < r_1 < \cdots < r_n$. One way to see this is to note that the matrix of these vectors is $\left(e^{-i^2 r_j}\right)_{1 \leq i, j \leq n} = \left(q_i^{r_j}\right)_{1 \leq i, j \leq n}$ where $q_i := e^{-i^2}$. Since $0 < q_n < \cdots < q_1$, this matrix is an (unsigned) exponential Vandermonde matrix in the terminology of Robbin--Salamon, and therefore has positive determinant. Hence $u(r, 1) \cdot x = 0$ for all $1/2 \leq r \leq 1$ would imply $x=0$, contrary to our assumption. This completes the claim and the proof. $\Box$
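(A numerical spot check of this determinant positivity, kept small since these determinants shrink rapidly with $n$; the particular $r$'s are arbitrary, and the rows are rescaled by positive factors, which preserves the sign while avoiding underflow:)

```python
import numpy as np

# det((e^{-i^2 r_j})) should be positive for any increasing r_1 < ... < r_n.
n = 4
i2 = np.arange(1, n + 1) ** 2
for r in (np.array([0.5, 0.65, 0.8, 1.0]),
          np.array([0.5, 0.7, 0.9, 1.0]),
          np.array([0.55, 0.6, 0.85, 0.95])):
    M = np.exp(-np.outer(i2, r - r[0]))    # row i rescaled by e^{+i^2 r_1} > 0
    print(np.linalg.det(M))                # positive in every case
```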


$^1$As fedja points out, the existence of an appropriate expression follows easily from Bernstein's theorem on completely monotone functions. As you'd expect, the explicit formula above can be derived by residue calculus.


The beautiful idea of @fedja (also used by @Swanson) is to show that $$\sum_{i,j} \sqrt{\lambda_i+\lambda_j}\, x_i x_j$$ is negative definite on the subspace $\sum_i x_i = 0$. Here is another way to check this. We have the integral formula

$$\sqrt{\lambda} = \frac{1}{2 \sqrt{\pi}} \int_{0}^{\infty}\frac{1 - e^{-\lambda s}}{s^{3/2}}\, ds$$ so we can write $$\sum_{i,j} \sqrt{\lambda_i+\lambda_j}\, x_i x_j = \frac{1}{2 \sqrt{\pi}}\int_{0}^{\infty} \frac{\sum_{i,j} (1- e^{-(\lambda_i+\lambda_j) s})\,x_i x_j}{s^{3/2}}\, ds.$$ But now observe that since $\sum_i x_i = 0$ we also have $\sum_{i,j} x_i x_j=0$, so the numerator in the integral equals $$-\sum_{i,j} e^{-(\lambda_i+\lambda_j) s}\,x_i x_j= - \Big(\sum_i e^{-\lambda_i s} x_i\Big)^2 \le 0,$$ so the integral is $\le 0$. But since the functions $(e^{-\lambda s})_{\lambda}$ form a linearly independent system, the integral is in fact $<0$ if not all the $x_i$ are $0$. Therefore, the quadratic form is negative definite on $\sum_i x_i = 0$.
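(Here is a small numerical check of this negative-definiteness; a sketch with `numpy`, where the dimension, the range of the $\lambda_i$, and the seed are arbitrary:)

```python
import numpy as np

# For random positive lambda_i, the form sum_{i,j} sqrt(lam_i + lam_j) x_i x_j
# should be negative for every nonzero x with sum(x) = 0.
rng = np.random.default_rng(2)
n = 6
lam = rng.uniform(0.1, 10.0, size=n)
A = np.sqrt(lam[:, None] + lam[None, :])
for _ in range(1000):
    x = rng.standard_normal(n)
    x -= x.mean()                      # project onto the hyperplane sum x_i = 0
    assert x @ A @ x < 0
print("x^T A x < 0 in all trials")
```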

Now, using the Cauchy interlacing theorem, we conclude that the matrix $(\sqrt{\lambda_i+\lambda_j})$ has one positive eigenvalue (the trace is positive) and the rest negative (or: there is no $2$-dimensional subspace on which this form is $\ge 0$, since any such subspace intersects $\sum_i x_i=0$ non-trivially).

This can be generalized, using other Bernstein functions, for instance $\lambda\mapsto \lambda^{\alpha}$ ($0<\alpha < 1$), since we have the integral representation $$\lambda^{\alpha} = \frac{\alpha}{\Gamma(1-\alpha)} \int_{0}^{\infty}\frac{1- e^{-\lambda s}}{s^{\alpha+1}}\, ds.$$
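(This representation can also be verified numerically; a sketch, with the test values of $\alpha$ and $\lambda$ arbitrary:)

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check lambda^alpha = (alpha / Gamma(1-alpha)) * int_0^inf (1 - e^{-lambda s}) / s^(alpha+1) ds.
for alpha in [0.25, 0.5, 0.9]:
    for lam in [0.5, 2.0, 7.0]:
        f = lambda s: (1 - np.exp(-lam * s)) / s ** (alpha + 1)
        val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]   # split for accuracy
        val *= alpha / gamma(1 - alpha)
        print(f"alpha = {alpha}, lambda = {lam}: {val:.8f} vs {lam**alpha:.8f}")
```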

One can take a look at the book by Schilling et al., *Bernstein Functions: Theory and Applications*.

Note: from numerical testing, it seems that matrices of the form $((\lambda_i +\mu_j)^{\alpha})$ ($0<\alpha<1$, the $\lambda_i$ and $\mu_i$ positive and ordered in the same way) have one positive eigenvalue and the rest negative. This would imply that the matrix $(-(\lambda_i +\mu_j)^{\alpha})$ is totally negative.
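(A sketch of such a numerical test, with all sampling choices arbitrary; in these trials the eigenvalues of the non-symmetric matrices come out real, so we compare real parts:)

```python
import numpy as np

# Test: ((lam_i + mu_j)^alpha), with lam and mu positive and both increasing,
# appears to have exactly one positive eigenvalue and n-1 negative ones.
rng = np.random.default_rng(3)
for _ in range(200):
    n = int(rng.integers(2, 7))
    alpha = rng.uniform(0.05, 0.95)
    lam = np.sort(rng.uniform(0.1, 10.0, size=n))
    mu = np.sort(rng.uniform(0.1, 10.0, size=n))
    eigs = np.linalg.eigvals((lam[:, None] + mu[None, :]) ** alpha)
    assert np.sum(eigs.real > 0) == 1 and np.sum(eigs.real < 0) == n - 1
print("one positive eigenvalue, rest negative, in all trials")
```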

$\bf{Added:}$ Sketch of proof for the integral representation:

For $\beta> 0$ we have $$\Gamma(\beta) = \int_{0}^{\infty} e^{-s} s^{\beta}\frac{ds}{s} = \int_{0}^{\infty}e^{-\lambda s} (\lambda s)^{\beta} \frac{ds}{s}= \lambda^{\beta} \int_{0}^{\infty}e^{-\lambda s} s^{\beta} \frac{ds}{s},$$ and so $$\lambda^{-\beta} = \frac{1}{\Gamma(\beta)} \int_0^{\infty} e^{-\lambda s} s^{\beta-1}\, ds.$$ Now integrate with respect to $\lambda$ from $0$ to $\lambda$ to get the formula above for $\alpha = 1-\beta \in (0,1)$.
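(The first identity is just the Gamma integral after substitution, and can be spot-checked numerically; the test values are arbitrary:)

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check lambda^{-beta} = (1/Gamma(beta)) * int_0^inf e^{-lambda s} s^{beta-1} ds.
for beta, lam in [(0.5, 2.0), (0.3, 5.0), (1.7, 0.8)]:
    val, _ = quad(lambda s: np.exp(-lam * s) * s ** (beta - 1), 0, np.inf)
    print(val / gamma(beta), lam ** -beta)   # the two numbers should match
```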

$\bf{Added:}$ Let's show that for $\alpha$ not an integer and $(\lambda_i)$, $(\mu_i)$ two $n$-tuples of distinct positive numbers, the determinant $\det((\lambda_i + \mu_j)^{\alpha})_{ij}$ is $\ne 0$. The determinant being $0$ is equivalent to the columns being linearly dependent, that is, to the existence of $a_j$ not all zero such that the function $$\sum_j a_j (x+\mu_j)^{\alpha}$$ vanishes at the points $x= \lambda_i$. Let us show by induction on $n$ that such a function $\sum_{j=1}^n a_j (x + \mu_j)^{\alpha}$ cannot have $n$ positive zeroes.

For $n=1$ it's clear. Assume the statement is true for $n-1$ and let's prove it for $n$.

Write $f(x) =\sum_j a_j (x + \mu_j)^{\alpha}$. We may assume (after a shift in the argument $x$) that the smallest $\mu_1 =0$. Now the positive zeroes of $f(x)$ are the positive zeroes of $$\frac{f(x)}{x^{\alpha}} = a_1 + \sum_{j>1} a_j \left(1 + \frac{\mu_j}{x}\right)^{\alpha}=a_1 + \sum_{j>1} a_j \mu_j^{\alpha}\, \left(\frac{1}{\mu_j} + \frac{1}{x}\right)^{\alpha}.$$ Substituting $y = 1/x$ (which maps positive zeroes to positive zeroes), we get another function $g(y) = a_1 + \sum_{j>1} b_j(y+ \xi_j)^{\alpha}$, with $b_j = a_j\mu_j^{\alpha}$ and $\xi_j = 1/\mu_j$, that has $n$ positive roots, so its derivative has $n-1$ positive roots by Rolle's theorem. But the derivative is a sum of $n-1$ terms of the same shape, involving the exponent $\alpha-1$ (again not an integer). Now apply the induction hypothesis.
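(For the skeptical, a crude numerical illustration of the zero-counting statement, a sketch only: it samples $f$ on a grid and counts sign changes, with all ranges, the grid, and the seed being arbitrary choices:)

```python
import numpy as np

# Count sign changes of f(x) = sum_j a_j (x + mu_j)^alpha on a grid; by the
# induction argument, f should never have n or more positive zeros.
rng = np.random.default_rng(5)
xs = np.linspace(1e-3, 50, 20001)
for _ in range(200):
    n = int(rng.integers(2, 6))
    alpha = rng.uniform(0.1, 2.9)
    if abs(alpha - round(alpha)) < 0.05:
        continue                        # keep alpha away from integers
    a = rng.standard_normal(n)
    mu = rng.uniform(0.1, 10.0, size=n)
    vals = (a[None, :] * (xs[:, None] + mu[None, :]) ** alpha).sum(axis=1)
    changes = np.sum(np.sign(vals[:-1]) != np.sign(vals[1:]))
    assert changes <= n - 1
print("never more than n-1 sign changes in these trials")
```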

Note: with a bit more care, one can show that the determinant $\det ((\lambda_i + \mu_j)^{\alpha})$ is $\ne 0$ whenever $(\lambda_i)$, $(\mu_i)$ are $n$-tuples of distinct positive numbers and $\alpha \ne 0, 1, \ldots, n-2$. Its sign will depend only on $n$ and $\alpha$ (perhaps only on the integral part of $\alpha$) and on the orderings of the $\lambda_i$, $\mu_i$.

An interesting problem is to determine this sign. We understand the case $0< \alpha<1$ (and $\alpha < 0$). Perhaps $\alpha \in (1,2)$ is next? There the determinant $\det((\lambda_i + \lambda_j)^{\alpha})$ should be negative for $n\ge 2$.
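(For those who want to experiment, here is a sketch of the kind of numerical exploration that suggests the sign depends only on $n$ and $\alpha$; the sampling choices are arbitrary and nothing here is a proof:)

```python
import numpy as np

# Explore the sign of det((lam_i + mu_j)^alpha) for alpha in (1, 2),
# with lam and mu sorted increasingly.
rng = np.random.default_rng(4)
for n in range(2, 6):
    for alpha in [1.2, 1.5, 1.8]:
        signs = set()
        for _ in range(100):
            lam = np.sort(rng.uniform(0.1, 10.0, size=n))
            mu = np.sort(rng.uniform(0.1, 10.0, size=n))
            signs.add(np.sign(np.linalg.det((lam[:, None] + mu[None, :]) ** alpha)))
        print(n, alpha, signs)   # empirically a single sign per (n, alpha)
```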