Unexpectedly simple patterns for the determinants of some matrices
Edit: "Spoiler"
Since it's a pretty wordy question, here's a quick spoiler... Why is the following true?
$$\det \begin{pmatrix} 0 & 1 & 2\\ 1 & 0 & 1 \\ 2 & 1 & 0 \end{pmatrix} =\det \begin{pmatrix} 0 & 1 & 2 & 0 & 1 & 2\\ 1 & 0 & 1 & 2 & 0 & 1\\ 2 & 1 & 0 & 1 & 2 & 0 \\ 0 & 2 & 1 & 0 & 1 & 2 \\ 1 & 0 & 2 & 1 & 0 & 1 \\ 2 & 1 & 0 & 2 & 1 & 0\end{pmatrix} = \det \begin{pmatrix} 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 \\ 1 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1\\ 2 & 1 & 0 & 1 & 2 & 0 & 1 & 2 & 0 \\ 0 & 2 & 1 & 0 & 1 & 2 & 0 & 1 & 2\\ 1 & 0 & 2 & 1 & 0 & 1 & 2 & 0 & 1\\ 2 & 1 & 0 & 2 & 1 & 0 & 1 & 2 & 0 \\ 0& 2 & 1 & 0 & 2 & 1 & 0 & 1 & 2 \\ 1 & 0 & 2 & 1 & 0 & 2 & 1 & 0 & 1 \\ 2 & 1 & 0 & 2 & 1 & 0 & 2 & 1 & 0\end{pmatrix} = \dots $$
Consider the matrix $$A=\begin{pmatrix} 0 & 1 & 2\\ 1 & 0 & 1 \\ 2 & 1 & 0 \end{pmatrix}\,.$$ A direct computation gives $\det A = 4$.
More generally, it is easy to show (by direct calculation) that, given $x\in\mathbb{R}$ and defining $$A(x) = \begin{pmatrix} x-1 & x & x+1 \\ x & x-1 & x \\ x+1 & x & x-1\end{pmatrix}\,,$$ we have $\det A(x) = 4x$.
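The identity $\det A(x) = 4x$ is easy to spot-check numerically. Here is a quick sketch in Python (standard library only; the helper `det` and the sample points are my own choices, not part of the question), using exact rational arithmetic:

```python
from fractions import Fraction

def det(rows):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    n, sign, prod = len(m), 1, Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i]), None)
        if piv is None:
            return Fraction(0)  # no pivot in this column: matrix is singular
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            sign = -sign
        prod *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return sign * prod

def A(x):
    """The 3x3 symmetric Toeplitz matrix A(x) from the question."""
    return [[x - 1, x, x + 1],
            [x, x - 1, x],
            [x + 1, x, x - 1]]

results = {x: int(det(A(x))) for x in (-3, 0, 1, 7)}
print(results)  # each value should equal 4*x
```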
The interesting fact is that these matrices can be "expanded" in a way that leaves the determinant invariant. Moreover, for a larger class of matrices there seem to be some simple, regular patterns in the determinants.
Introducing some notation...
First, I need to introduce some notation. Let $\mathbf{c} = \{c_1, c_2, \dots, c_n\}$. I'll denote by $T(\mathbf{c})$ the $n\times n$ symmetric Toeplitz matrix whose main and upper diagonals are given by the coefficients $c_1, \dots, c_n$. I mean something like $$T(\{c_1,c_2,c_3,c_4\}) = \begin{pmatrix} c_1 & c_2 & c_3 & c_4\\c_2 & c_1 & c_2 & c_3 \\ c_3 & c_2 & c_1 & c_2 \\ c_4 & c_3 & c_2 & c_1 \end{pmatrix}\,.$$
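For experimenting, this construction is a one-liner. A small Python helper (my own naming, not from the question) that mirrors the notation, shown here with $c_i = i$:

```python
def toeplitz(c):
    """Symmetric Toeplitz matrix T(c): entry (i, j) equals c[abs(i - j)]."""
    n = len(c)
    return [[c[abs(i - j)] for j in range(n)] for i in range(n)]

T4 = toeplitz([1, 2, 3, 4])
for row in T4:
    print(row)
```

The first row and column carry the coefficients in order, exactly as in the $4\times 4$ example above.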
If we call $\mathbf{v}(x) = \{x-1,x,x+1\}$, then $A(x) = T(\mathbf{v}(x))$.
Finally, given an $n$-dimensional vector $\mathbf{c} = \{c_1, \dots, c_n\}$, I'll write $\mathbf{c}^k$ for the $(k\cdot n)$-dimensional vector obtained by joining together $k$ copies of $\mathbf{c}$. For example $$\{c_1,c_2,c_3,c_4\}^3 = \{c_1,c_2,c_3,c_4,c_1,c_2,c_3,c_4,c_1,c_2,c_3,c_4\}\,.$$
The main question
I've stated at the beginning that $\det A(x) = 4x$; with the above notation, $\det T(\mathbf{v}(x)) = 4x$. Actually, it seems to be true (at least in every case I've tried with Mathematica) that for every positive integer $k$ $$\det T(\mathbf{v}^k(x)) = 4x\,.$$ I guess this result can be proven by induction on $k$, but that seems a bit painful. I would expect a simple, clean proof for such a neat result.
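A brute-force check of the claim for small $k$ (Python, standard library only; `det` and `toeplitz` are my own helpers, using exact rational arithmetic):

```python
from fractions import Fraction

def det(rows):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    n, sign, prod = len(m), 1, Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i]), None)
        if piv is None:
            return Fraction(0)  # singular
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            sign = -sign
        prod *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return sign * prod

def toeplitz(c):
    """Symmetric Toeplitz matrix T(c): entry (i, j) equals c[abs(i - j)]."""
    n = len(c)
    return [[c[abs(i - j)] for j in range(n)] for i in range(n)]

def v(x):
    return [x - 1, x, x + 1]

# det T(v^k(x)) should equal 4x regardless of k (matrix size 3k).
checks = [det(toeplitz(v(x) * k)) == 4 * x
          for k in range(1, 5) for x in (-2, 0, 3)]
print(all(checks))
```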
Any ideas about what's going on and why the determinants are so simple?
Going a bit further...
Having noticed that things were so simple for $\mathbf{v}(x)=\{x-1,x,x+1\}$, the first thing I tried was to slightly change $\mathbf{v}$. Let's now consider $T(\{x-2,x-1,x,x+1,x+2\}^k)$. Unfortunately, in this case things get much more complicated. For $k=1$ the determinant is $16 x$, but for $k=2$ it's $113288 x$, for $k=3$ it's $65157184 x$, and so on. Things are clearly much messier here.
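These coefficients are easy to reproduce. Since adding $x$ to every entry is the rank-one update $T(\mathbf c^k(0)) + x\,\mathbf e\mathbf e^T$, the determinant is an affine function of $x$, so its slope is the value at $x=1$ minus the value at $x=0$. A sketch in Python (standard library only; the helper names and the name `u` for this vector are mine):

```python
from fractions import Fraction

def det(rows):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    n, sign, prod = len(m), 1, Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i]), None)
        if piv is None:
            return Fraction(0)  # singular
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            sign = -sign
        prod *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return sign * prod

def toeplitz(c):
    n = len(c)
    return [[c[abs(i - j)] for j in range(n)] for i in range(n)]

def u(x):  # the vector {x-2, x-1, x, x+1, x+2}
    return [x - 2, x - 1, x, x + 1, x + 2]

# Coefficient of x in det T(u^k(x)), for k = 1, 2, 3.
coeffs = [int(det(toeplitz(u(1) * k)) - det(toeplitz(u(0) * k)))
          for k in range(1, 4)]
print(coeffs)
```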
But... Let's define $\mathbf{w}(x) = \{x+2,x-1,x,x+1,x-2\}$. Then the sequence of determinants seems to be very regular.
\begin{align} &\det T(\mathbf{w}(x)) = 16 x\\ &\det T(\mathbf{w}^2(x)) = -8 x\\ &\det T(\mathbf{w}^3(x)) = 0\\ &\det T(\mathbf{w}^4(x)) = -8 x\\ &\det T(\mathbf{w}^5(x)) = 16 x\\ &\det T(\mathbf{w}^6(x)) = -8 x\\ &\det T(\mathbf{w}^7(x)) = 0\\ &\det T(\mathbf{w}^8(x)) = -8 x \end{align} and so on. So there is a clear pattern in the dependence on $k$: $$\{16, -8, 0, -8, 16, -8, 0, -8, 16, -8, 0, -8, 16, -8, 0, -8, 16, -8, 0, -8,\dots\}\,.$$
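The period-4 pattern above can be reproduced with a short check (Python, standard library only; helper names are mine). Again the determinant is affine in $x$, so the coefficient is the determinant at $x=1$ minus the determinant at $x=0$:

```python
from fractions import Fraction

def det(rows):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    n, sign, prod = len(m), 1, Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i]), None)
        if piv is None:
            return Fraction(0)  # singular
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            sign = -sign
        prod *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return sign * prod

def toeplitz(c):
    n = len(c)
    return [[c[abs(i - j)] for j in range(n)] for i in range(n)]

def w(x):
    return [x + 2, x - 1, x, x + 1, x - 2]

# Coefficient of x in det T(w^k(x)), for k = 1..8.
coeffs = [int(det(toeplitz(w(1) * k)) - det(toeplitz(w(0) * k)))
          for k in range(1, 9)]
print(coeffs)
```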
Then we may look at $T(\{x-3,x+2,x-1,x,x+1,x-2,x+3\})$ and again there is a pattern: $$\{64, 12, 4, 0, 4, 12, 64, 12, 4, 0, 4, 12, 64, 12, 4, 0, 4, 12, 64, \dots\}\,.$$
And again for $T(\{x+4,x-3,x+2,x-1,x,x+1,x-2,x+3,x-4\})$ a new pattern: $$\{256, -16, 0, -16, 0, -16, 0, -16, 256, -16, 0, -16, 0, -16, 0, -16, 256, -16, 0, -16,\dots\}\,.$$
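Both of these longer patterns can be checked the same way (Python, standard library only; the names `u7`, `u9`, and `slope` are my own labels for the two coefficient vectors and the affine slope of the determinant in $x$):

```python
from fractions import Fraction

def det(rows):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    n, sign, prod = len(m), 1, Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i]), None)
        if piv is None:
            return Fraction(0)  # singular
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            sign = -sign
        prod *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return sign * prod

def toeplitz(c):
    n = len(c)
    return [[c[abs(i - j)] for j in range(n)] for i in range(n)]

def u7(x):  # {x-3, x+2, x-1, x, x+1, x-2, x+3}
    return [x - 3, x + 2, x - 1, x, x + 1, x - 2, x + 3]

def u9(x):  # {x+4, x-3, x+2, x-1, x, x+1, x-2, x+3, x-4}
    return [x + 4, x - 3, x + 2, x - 1, x, x + 1, x - 2, x + 3, x - 4]

def slope(vec, k):
    """Coefficient of x: the determinant is affine in x, so det(1) - det(0)."""
    return int(det(toeplitz(vec(1) * k)) - det(toeplitz(vec(0) * k)))

p7 = [slope(u7, k) for k in range(1, 7)]
p9 = [slope(u9, k) for k in range(1, 5)]
print(p7, p9)
```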
I would bet on the existence of a simple explanation for these patterns, but for now I really have no clue. Any ideas?
Solution 1:
I will focus on $\mathbf v$, but the explanation holds for $\mathbf w$ as well. Note that we can write $$ T(\mathbf v^k(x)) = xJ + T_0, \qquad T_0 := T(\mathbf v^k(0)), $$ where $J$ is the matrix of all $1$s; that is, $J = \mathbf e \mathbf e^T$, where $\mathbf e = (1,\dots,1)^T$. In all the cases you consider, $T_0$ turns out to be singular (your computed determinants all vanish at $x = 0$), so $\det(T_0) = 0$. Now, since $xJ$ is a rank-one update, the matrix determinant lemma gives $$ \det[T(\mathbf v^k(x))] = \det(T_0) + (\mathbf e^T\operatorname{adj}(T_0) \mathbf e) \cdot x = (\mathbf e^T\operatorname{adj}(T_0) \mathbf e) \cdot x. $$ In other words, the determinant is always a constant multiple of $x$.
In fact, we can say a bit more: in the case where $\operatorname{adj}(T_0) \neq 0$, the symmetric matrix $T_0$ has rank $kn-1$, so its kernel is spanned by a single vector $\mathbf u$. It follows that we can write $$ \operatorname{adj}(T_0) = \alpha\, \frac{\mathbf u\mathbf u^T}{\mathbf u^T\mathbf u}, $$ where $\alpha$ is the product of the non-zero eigenvalues of $T_0$. By the cofactor formula for the adjugate, the bottom-right entry $\alpha u_{kn}^2/(\mathbf u^T\mathbf u)$ is the determinant of the symmetric Toeplitz matrix obtained by deleting the last row and column of $T_0$.
Once that is established, we note that $$ \mathbf e^T\operatorname{adj}(T_0)\, \mathbf e = \alpha\,\frac{(\mathbf e^T\mathbf u)^2}{\mathbf u^T\mathbf u}, $$ so that our overall formula becomes $\det T(\mathbf v^k(x)) = \alpha\,\dfrac{(\mathbf e^T\mathbf u)^2}{\mathbf u^T\mathbf u}\, x$. (For $\mathbf v$ and $k=1$, for instance, $\mathbf u = (1,0,1)^T$ and $\alpha = 2$, giving the coefficient $2\cdot 2^2/2 = 4$.)
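To make this concrete, here is a check of the smallest case $k=1$ in Python (standard library only; `det`, `minor`, and `adjugate` are my own helpers): $T_0 = T(\mathbf v(0))$ is indeed singular, and $\mathbf e^T \operatorname{adj}(T_0)\,\mathbf e = 4$, matching $\det T(\mathbf v(x)) = 4x$.

```python
from fractions import Fraction

def det(rows):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in r] for r in rows]
    n, sign, prod = len(m), 1, Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i]), None)
        if piv is None:
            return Fraction(0)  # singular
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            sign = -sign
        prod *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return sign * prod

def minor(m, i, j):
    """Delete row i and column j."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(m) if r != i]

def adjugate(m):
    """adj(m)[i][j] is the (j, i) cofactor of m."""
    n = len(m)
    return [[(-1) ** (i + j) * det(minor(m, j, i)) for j in range(n)]
            for i in range(n)]

T0 = [[-1, 0, 1], [0, -1, 0], [1, 0, -1]]  # T(v(0)) for k = 1
adj = adjugate(T0)
singular = (det(T0) == 0)
quad = int(sum(sum(row) for row in adj))  # e^T adj(T0) e
print(singular, quad)
```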
For any vector $\mathbf v = (v_1,\dots,v_n)$, denote by $[\mathbf v] = (v_1,\dots,v_{n-1})$ the truncated vector. With the above established, we have reduced your observations of regularity to computing the truncated determinants $\det T([\mathbf v^k(0)])$ and $\det T([\mathbf w^k(0)])$.