Nonsingularity of Euclidean distance matrix
Let $x_1, \dots, x_k \in \mathbb{R}^n$ be distinct points and let $A$ be the matrix defined by $A_{ij} = d(x_i, x_j)$, where $d$ is the Euclidean distance. Is $A$ always nonsingular?
I have a feeling this should be well known (or, at least, that a reference should exist); on the other hand, the statement fails for general metrics (take, e.g., the path metric on the cycle $C_4$).
edit: changed number of points from $n$ to general $k$
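To see the $C_4$ failure concretely, here is a quick numerical check (a sketch using NumPy): the matrix below is the path-metric distance matrix of the 4-cycle, and the alternating vector lies in its kernel.

```python
import numpy as np

# Path metric on the 4-cycle C_4: adjacent vertices at distance 1,
# opposite vertices at distance 2.
A = np.array([
    [0, 1, 2, 1],
    [1, 0, 1, 2],
    [2, 1, 0, 1],
    [1, 2, 1, 0],
], dtype=float)

# The alternating vector is in the kernel, so A is singular.
v = np.array([1, -1, 1, -1], dtype=float)
print(A @ v)  # [0. 0. 0. 0.]
```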
Solution 1:
I think it should be possible to show that your distance matrix is always nonsingular by showing that it is always a Euclidean distance matrix in the usual sense of the term (i.e., with squared distances as entries) for a non-degenerate set of points. I don't give a full proof but sketch some ideas that I think can be fleshed out into one.
Two relevant papers on Euclidean distance matrices are Discussion of a Set of Points in Terms of Their Mutual Distances by Young and Householder and Metric Spaces and Positive Definite Functions by Schoenberg. They show that an $n\times n$ matrix $A$ is a Euclidean distance matrix if and only if $x^\top Ax\le0$ for all $x$ with $e^\top x=0$ (where $e$ is the vector with $1$ in each component), and that the affine dimension of the points is $n-1$ (the maximum possible for $n$ points) if and only if the inequality is strict for all such $x\ne0$.
It follows that a Euclidean distance matrix can only be singular if the affine dimension of the points is less than $n-1$: if the affine dimension is $n-1$, there cannot be an eigenvalue $0$. Indeed, there is a positive eigenvalue (since $e^\top Ae\gt0$), and the span of the corresponding eigenvector together with a kernel vector would intersect the hyperplane $e^\top x=0$ non-trivially; on that span the quadratic form is nonnegative, contradicting the strict negative definiteness of $A$ on the hyperplane.
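As a sanity check of this criterion, the following sketch (using NumPy; the random points are just an illustrative choice) verifies that the squared-distance matrix of $n$ generic points in $\mathbb{R}^{n-1}$ is negative definite on the hyperplane $e^\top x=0$:

```python
import numpy as np

rng = np.random.default_rng(0)

# n random points in R^{n-1}; generically they have affine dimension n - 1.
n = 5
pts = rng.standard_normal((n, n - 1))

# Euclidean distance matrix in the usual sense: squared distances.
diff = pts[:, None, :] - pts[None, :, :]
A = (diff ** 2).sum(axis=-1)

# Columns e_i - e_{i+1} form a basis of the hyperplane e^T x = 0.
B = np.eye(n)[:, :-1] - np.eye(n)[:, 1:]

# A restricted to that hyperplane should be negative definite.
restricted = B.T @ A @ B
print(np.linalg.eigvalsh(restricted).max() < 0)  # True (strict negativity)
```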
To use all this for your case, one could try to show that a distance matrix in your sense is always a Euclidean distance matrix in the usual sense for points with affine dimension $n-1$. I think this could be done by continuously varying the exponent $\alpha$ in $A_{ij}=d(x_i,x_j)^\alpha$ from $1$ to $2$ and showing a) that there is always a direction in which the points can move such that $A$ remains their distance matrix with the changing exponent and b) that this movement necessarily causes them to have affine dimension $n-1$.
To get a feel for how this might work, consider a square: The movement would bend the square into a tetrahedron. The proof would need to account for the fact that this seems to hold only for $\alpha\lt2$; you can see from the example of three points in a line that they can be bent to accommodate $\alpha\lt2$ but not $\alpha\gt2$.
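The three-point example can be made explicit: for collinear points $0,1,2$, realizing $d(x_i,x_j)^\alpha$ as squared distances of new points requires a triangle with side lengths $1,1,2^{\alpha/2}$, and the triangle inequality $2^{\alpha/2}\le 2$ holds exactly for $\alpha\le 2$. A quick check:

```python
# Collinear points 0, 1, 2 have mutual distances 1, 1, 2. Realizing
# d(x_i, x_j)**alpha as *squared* distances of new points requires a
# triangle with side lengths 1, 1, 2**(alpha/2), which exists exactly
# when the triangle inequality 2**(alpha/2) <= 1 + 1 holds.
for alpha in (1.0, 1.5, 2.0, 2.5):
    print(alpha, 2 ** (alpha / 2) <= 2)  # True, True, True, False
```

At $\alpha=2$ the triangle degenerates back to a line, matching the remark that the bending only helps strictly below $2$.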
Solution 2:
This is true. Over in the comments of a related question, darij grinberg has just shown that $A$ is negative definite when restricted to the codimension-one subspace where the coordinates sum to $0$. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $A$, and let $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_{n-1}$ be the eigenvalues of $A$ restricted to this subspace, so darij's argument shows that $0 > \mu_1 \geq \cdots \geq \mu_{n-1}$. By the Eigenvalue Interlacing Theorem, we have $\mu_1 \geq \lambda_2 \geq \mu_2 \geq \lambda_3 \geq \cdots \geq \mu_{n-1} \geq \lambda_n$, so we know that $0 > \lambda_2 \geq \cdots \geq \lambda_n$. It remains to show that $\lambda_1$ is not zero.
If $\lambda_1$ were $0$, then $A$ would be negative semidefinite. But, letting $\vec{j}$ denote the all ones vector, we have $\vec{j}^T A \vec{j} > 0$, a contradiction.
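Putting this together, the predicted eigenvalue signature (one positive eigenvalue, the rest negative, hence a nonzero determinant) can be checked numerically; a sketch with NumPy, where the point configuration is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# k distinct random points in R^3 (an arbitrary illustrative choice).
k, dim = 6, 3
pts = rng.standard_normal((k, dim))

# Plain (unsquared) Euclidean distance matrix, as in the question.
A = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

eig = np.linalg.eigvalsh(A)  # ascending order
print(eig[-1] > 0, (eig[:-1] < 0).all())  # one positive, k-1 negative

j = np.ones(k)
print(j @ A @ j > 0)  # True: rules out negative semidefiniteness
```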