Maximizing the sum $\sum\limits_{i=1}^nx_ix_{i+1}$ subject to $\sum\limits_{i=1}^nx_i=0$ and $\sum\limits_{i=1}^nx_i^2=1$

Is there an efficient way of solving the following problem?

Given $x_1,\dots,x_n\in \mathbb R$ with $\sum\limits_{i=1}^nx_i=0$ and $\sum\limits_{i=1}^nx_i^2=1$, I want to maximize $\sum\limits_{i=1}^nx_ix_{i+1}$, where we take $x_{n+1}=x_1$.

I don't know if this is relevant or useful at all, but maybe representing the second constraint and the objective as the quadratic forms $\vec{x}^TI\,\vec{x}$ and $\vec{x}^T\{\delta_{i,\,i+1}\}\,\vec{x}$ might help?
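For small $n$ one can at least experiment numerically. A minimal sketch using `scipy.optimize` (SLSQP with random restarts is just one convenient setup, not a definitive choice):

```python
# Numerical experiment: maximize sum_i x_i x_{i+1} (cyclically) subject
# to sum x_i = 0 and sum x_i^2 = 1, for one small n.
import numpy as np
from scipy.optimize import minimize

n = 7

def neg_objective(x):
    # Negated because scipy minimizes; np.roll(x, -1) is the cyclic shift x_{i+1}.
    return -np.dot(x, np.roll(x, -1))

constraints = [
    {"type": "eq", "fun": lambda x: np.sum(x)},         # sum x_i = 0
    {"type": "eq", "fun": lambda x: np.dot(x, x) - 1},  # sum x_i^2 = 1
]

rng = np.random.default_rng(0)
best = max(
    (minimize(neg_objective, rng.standard_normal(n),
              method="SLSQP", constraints=constraints) for _ in range(20)),
    key=lambda res: -res.fun,
)
print(-best.fun)  # numerically observed maximum for this n
```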

Thanks.


To make the notation simpler, we will use $x_0=x_n$ and $x_{n+1}=x_1$.

Because $\sum\limits_{i=1}^nx_i=0$, the allowable variations $\{\delta x_i\}$ must satisfy $$ \sum_{i=1}^n\delta x_i=0\tag{1} $$ and because $\sum\limits_{i=1}^nx_i^2=1$, $$ \sum_{i=1}^nx_i\delta x_i=0\tag{2} $$ For $\sum\limits_{i=1}^nx_ix_{i+1}$ to be maximal, any variation which satisfies $(1)$ and $(2)$ must also satisfy $$ \sum_{i=1}^n(x_{i-1}+x_{i+1})\delta x_{i}=0\tag{3} $$ Thus, the vector $(x_{i-1}+x_{i+1})_{i=1}^n$ is orthogonal to every vector orthogonal to both $(1,1,\dots,1)$ and $(x_i)_{i=1}^n$, so it must lie in their span; that is, there must be $\mu$ and $\lambda$ so that $$ x_{i-1}+x_{i+1}=\mu+2\lambda x_i\tag{4} $$ Summing $(4)$ in $i$ and considering $\sum\limits_{i=1}^nx_i=0$, we get that $\mu=0$. Therefore, $$ x_{i+1}=2\lambda x_i-x_{i-1}\tag{5} $$ Multiplying $(5)$ by $x_i$, summing in $i$, and using $\sum\limits_{i=1}^nx_i^2=1$ together with the cyclic symmetry $\sum\limits_{i=1}^nx_ix_{i+1}=\sum\limits_{i=1}^nx_ix_{i-1}$ yields $$ \lambda=\sum_{i=1}^nx_ix_{i-1}\tag{6} $$ Since the solution of $(5)$ must be $n$-periodic, both roots $r$ of the characteristic equation $x^2-2\lambda x+1=0$ must satisfy $r^n=1$.
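Explicitly, the roots of the characteristic equation are $$ r_{1,2}=\lambda\pm\sqrt{\lambda^2-1}\,,\qquad r_1r_2=1 $$ Since $r^n=1$ forces $|r|=1$, we may write $r_{1,2}=e^{\pm i\theta}$ with $\lambda=\cos(\theta)$; then $e^{in\theta}=1$ gives $\theta=\frac{2\pi k}{n}$ for some integer $k$, so the critical values are $\lambda=\cos\left(\frac{2\pi k}{n}\right)$.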

If we use $r=1$, then $\lambda=1$ and all the $x_i$ are equal; but then $\sum\limits_{i=1}^nx_i=0$ forces every $x_i=0$, so the constraint $\sum\limits_{i=1}^nx_i^2=1$ cannot be satisfied.

Answer:

If we use $r_1=e^{\frac{i2\pi}{n}}$ and $r_2=e^{\frac{-i2\pi}{n}}$, then $\lambda=\cos\left(\frac{2\pi}{n}\right)$; the remaining admissible roots $e^{\pm\frac{i2\pi k}{n}}$ with $2\le k\le n-2$ give only the smaller critical values $\cos\left(\frac{2\pi k}{n}\right)$. Thus, for $n\ge3$, $$ x_i=\sqrt{\frac{2}{n}}\cos\left(\frac{2\pi}{n}i\right)\tag{7} $$ yields the maximum $$ \sum_{i=1}^nx_ix_{i-1}=\cos\left(\frac{2\pi}{n}\right)\tag{8} $$

Verification:

Using the formula for the sum of a geometric series yields $$ \sum_{k=0}^{n-1}e^{\frac{i2\pi}{n}k}=\frac{e^{\frac{i2\pi}{n}n}-1}{e^{\frac{i2\pi}{n}}-1}=0\tag{9a} $$ $$ \sum_{k=0}^{n-1}e^{\frac{i4\pi}{n}k}=\frac{e^{\frac{i4\pi}{n}n}-1}{e^{\frac{i4\pi}{n}}-1}=0\tag{9b} $$ Therefore, the real parts of $(9)$ show that for $n\ge3$ $$ \sum_{i=1}^n\cos\left(\frac{2\pi}{n}i\right)=0\tag{10a} $$ $$ \sum_{i=1}^n\cos\left(\frac{4\pi}{n}i\right)=0\tag{10b} $$ Thus, $\mathrm{(10a)}$ verifies that $(7)$ satisfies $\sum\limits_{i=1}^nx_i=0$. Furthermore, $\mathrm{(10b)}$ yields $$ \begin{align} \sum_{i=1}^n\cos^2\left(\frac{2\pi}{n}i\right) &=\sum_{i=1}^n\frac{\cos\left(\frac{4\pi}{n}i\right)+1}{2}\\ &=\frac{n}{2}\tag{11} \end{align} $$ which, after multiplying by $\frac{2}{n}$, verifies that $(7)$ satisfies $\sum\limits_{i=1}^nx_i^2=1$.

Using the identity $\cos(x+y)+\cos(x-y)=2\cos(x)\cos(y)$ shows that $$ \begin{align} \sum_{i=1}^n\cos\left(\frac{2\pi}{n}i\right)\cos\left(\frac{2\pi}{n}(i-1)\right) &=\frac{1}{2}\sum_{i=1}^n\left(\cos\left(\frac{2\pi}{n}\right)+\cos\left(\frac{2\pi}{n}(2i-1)\right)\right)\\ &=\frac{n}{2}\cos\left(\frac{2\pi}{n}\right)+\frac{1}{2}\sum_{i=1}^{2n}\cos\left(\frac{2\pi}{n}i\right)-\frac{1}{2}\sum_{i=1}^n\cos\left(\frac{4\pi}{n}i\right)\\ &=\frac{n}{2}\cos\left(\frac{2\pi}{n}\right)\tag{12} \end{align} $$ which, after multiplying by the factor $\frac{2}{n}$ from $(7)$, verifies that $(7)$ yields $\sum\limits_{i=1}^nx_ix_{i-1}=\cos\left(\frac{2\pi}{n}\right)$.
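For good measure, here is a numerical cross-check of $(7)$ and $(8)$ (a NumPy sketch; the sample size $n=9$ is arbitrary):

```python
# Cross-check (7) and (8) for one n: the vector
# x_i = sqrt(2/n) cos(2*pi*i/n) should satisfy both constraints and give
# sum_i x_i x_{i-1} = cos(2*pi/n).
import numpy as np

n = 9  # any n >= 3
i = np.arange(1, n + 1)
x = np.sqrt(2 / n) * np.cos(2 * np.pi * i / n)

print(np.isclose(x.sum(), 0.0))                      # sum x_i = 0
print(np.isclose((x ** 2).sum(), 1.0))               # sum x_i^2 = 1
print(np.isclose(np.dot(x, np.roll(x, 1)),           # sum x_i x_{i-1}
                 np.cos(2 * np.pi / n)))             # equals cos(2*pi/n)
```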


You can use Lagrange multipliers. There are probably good books that do a better job explaining this subject than the Wikipedia article on Lagrange multipliers.


EDIT: What was earlier a conjecture is now proved.


Following this approach, the maximal value is one half of the largest eigenvalue of a certain matrix $M$ below, once the eigenvalue $2$ (whose eigenspace is ruled out by the first constraint) is excluded.

You have a function to optimize: $$F(\vec{x})=\sum x_ix_{i+1}$$ subject to two constraints: \begin{align}G(\vec{x})&=\sum x_i=0\\ H(\vec{x})&=\sum x_i^2=1 \end{align}

With such simple polynomial functions as these, the method of Lagrange multipliers states that if $\vec{x}$ is a potential extremal point, then for some $\lambda$ and $\mu$, \begin{align}\nabla F&=\lambda\nabla G+\mu\nabla H\\ M\vec{x} & = \lambda\vec{1}+2\mu\vec{x} \end{align} where \begin{align} M&=\ \begin{bmatrix}0&1&0&0&\cdots&0&1\\ 1&0&1&0&\cdots&0&0\\ 0&1&0&1&\cdots&0&0\\ 0&0&1&0&\cdots&0&0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots&\vdots\\ 0&0&0&0&\cdots&0&1\\ 1&0&0&0&\cdots&1&0\end{bmatrix}\\ \vec{1}&=\begin{bmatrix}1\\1\\1\\1\\\vdots\\1\\1\end{bmatrix} \end{align}
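For concreteness, here is a small NumPy sketch (the names and the choice $n=6$ are just for illustration) that builds $M$ and checks componentwise that $(M\vec{x})_i=x_{i-1}+x_{i+1}$, i.e. that $M\vec{x}$ computes the gradient $\nabla F$:

```python
# Build the cyclic neighbor matrix M for a given n >= 3: M[i][j] = 1
# exactly when j = i+1 or j = i-1 modulo n, and check grad F = M x.
import numpy as np

n = 6
idx = np.arange(n)
M = np.zeros((n, n))
M[idx, (idx + 1) % n] = 1  # superdiagonal plus one corner entry
M[idx, (idx - 1) % n] = 1  # subdiagonal plus the other corner entry

x = np.random.default_rng(1).standard_normal(n)
neighbor_sums = np.roll(x, 1) + np.roll(x, -1)  # x_{i-1} + x_{i+1}
print(np.allclose(M @ x, neighbor_sums))        # True
```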

Summing the equation $M\vec{x} = \lambda\vec{1}+2\mu\vec{x}$ over all rows and using the first constraint shows that $\lambda=0$: the left side sums to $2\sum_i x_i=0$ (each $x_i$ appears in exactly two rows), while the right side sums to $n\lambda+2\mu\sum_i x_i=n\lambda$. Thus $\vec{x}$ must be an eigenvector of $M$ in the eigenspace $V_{2\mu}$. This gives more constraints: $J(\vec{x})=(M-2\mu I)\vec{x}=0$. Since $\nabla F=M\vec{x}$ is a linear combination of the rows of $\nabla J$ and of $\nabla H=2\vec{x}$, $F$ must be constant subject to the constraints $H(\vec{x})=1$ and $J(\vec{x})=0$.

So $F$ takes constant values on the eigenspaces of $M$ intersected with the sphere given by $H(\vec{x})=1$. "All" that remains is to find one eigenvector from each eigenspace of $M$ (other than $V_2$, which is spanned by the all-ones vector $\vec{1}$ and is therefore ruled out by the constraint $G(\vec{x})=0$) and compute $F$. I do not know a way to handle this matrix $M$ for all values of $n$ simultaneously, though; numerically, any particular $n$ is easy, as sketched below.
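Here is such a check for one $n$ (a NumPy sketch; `numpy.linalg.eigh` applies because $M$ is symmetric, and $n=8$ is arbitrary). The last line compares half the relevant eigenvalue with the value $\cos\left(\frac{2\pi}{n}\right)$ obtained in the first answer:

```python
# Eigenvalues of M for one particular n; the constrained maximum of F is
# half the largest eigenvalue other than 2 (whose eigenvector, the
# all-ones vector, is excluded by G(x) = 0).
import numpy as np

n = 8
idx = np.arange(n)
M = np.zeros((n, n))
M[idx, (idx + 1) % n] = 1
M[idx, (idx - 1) % n] = 1

w, V = np.linalg.eigh(M)           # ascending eigenvalues, orthonormal eigenvectors
print(w)                           # w[-1] == 2, ruled out by G(x) = 0
x = V[:, -2]                       # unit eigenvector for the next eigenvalue
print(np.dot(x, np.roll(x, -1)))   # F(x) = mu = w[-2] / 2
print(w[-2] / 2, np.cos(2 * np.pi / n))  # matches the first answer's value
```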

If $\vec{x}$ is an eigenvector for $M$ with eigenvalue $2\mu$ satisfying $H(\vec{x})=1$, then \begin{align} F(\vec{x}) & =\vec{x}^t\left(\frac{1}{2}M\right)\vec{x}\\ &=\vec{x}^t\frac{1}{2}(2\mu\vec{x})\\ &=\mu\ \vec{x}^t\vec{x}\\ &=\mu \end{align}

So in summary, the only potential extremal points for $F$ happen at the intersections of the unit sphere with the various eigenspaces of $M$. In these intersections, $F$ has constant value $\mu$, which is one half of the eigenvalue of $M$ for that eigenspace. If you can find the eigenvalues of $M$, you have the answers to your question.