Which polynomials fix the unit circle?

Edit: now a complete solution, though case D is rather "hairy". It's conceptually very simple, as we basically only expand the squares and solve the equations they yield, but one has to be careful when doing this. This part is much longer than the preceding three.

We will use some basic facts about polynomials:

  • Two polynomials are equal iff they have the same coefficients.
  • The monomial of highest degree of $(\alpha_0+\alpha_1x+\cdots+\alpha_nx^n)^2$ is $\alpha_n^2x^{2n}$.
  • If a polynomial of degree $n$ has more than $n$ roots, it's the null polynomial. In particular, a polynomial with infinitely many roots is null.

Now, let's hunt the Snark...

A. A solution $P$ is either an odd polynomial or an even polynomial

We suppose we have a polynomial $P$ satisfying $P(x)^2+P(y)^2=1$ whenever $x^2+y^2=1$. Notice that $-P$ is also a solution.

The first thing to notice is that for any $x\in[-1,1]$, you can find a $y$ such that $x^2+y^2=1$: there are two choices, $y=\pm\sqrt{1-x^2}$. For such an $(x,y)$, the pair $(-x,y)$ is equally valid.

This means that for any $x\in[-1,1]$, there is a $y$ such that

$$P(x)^2=P(-x)^2=1-P(y)^2$$

Hence, for any $x\in [-1,1]$,

$$P(x)^2-P(-x)^2=(P(x)+P(-x))(P(x)-P(-x))=0$$

You have a product that vanishes for infinitely many $x$, so (pigeonhole principle), there is at least one factor that vanishes for infinitely many $x$.

But both factors are polynomials, so one of them is the null polynomial, and

$$P(x)=P(-x)$$

or

$$P(x)=-P(-x)$$

So $P$ is either even or odd. It can't be both, since it would then be null, and the null polynomial is not a solution. That is, either $P$ has only terms of even degree, or it has only terms of odd degree.

B. In the even case, $P$ must be a constant.

Let's suppose $\deg P=2n\gt 0$, and write, with $y=\sqrt{1-x^2}$,

$$P(x)^2+P(y)^2=\left(\sum_{k=0}^na_{2k} x^{2k}\right)^2+\left(\sum_{k=0}^na_{2k} (1-x^2)^k\right)^2=1$$

And with our hypothesis on degree, we must have $a_{2n}\neq0$.

Now, let's find the monomial of highest degree on the left-hand side.

Inside the first square, the highest monomial is $a_{2n}x^{2n}$; inside the second one, after expanding $(1-x^2)^n$, it is $(-1)^na_{2n}x^{2n}$.

After squaring and summing, the monomial of highest degree on the left is thus $2a_{2n}^2x^{4n}$. But its coefficient must be zero, since the right-hand side is a constant, and we have supposed $n>0$.

Contradiction! So if $P$ is even, its degree must be $0$.

Incidentally, a solution of degree $0$ satisfies $P(x)=a$, for all $x$, and

$$P(x)^2+P(y)^2=2a^2=1$$

Hence $P(x)=\frac{\sqrt2}{2}$ or $P(x)=-\frac{\sqrt2}{2}$

These are the only solutions if $P$ is even.
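As a quick sanity check of parts A and B (a minimal sympy sketch; $P(x)=x^2$ is just an arbitrary even example, not anything forced by the argument above):

```python
import sympy as sp

x = sp.symbols('x')

# The constant solution a = sqrt(2)/2: P(x)^2 + P(y)^2 = 2a^2 = 1.
a = sp.sqrt(2)/2
print(sp.simplify(2*a**2))                 # 1

# An even P of positive degree, e.g. P(x) = x^2; on the circle, P(y) = 1 - x^2.
lhs = sp.expand((x**2)**2 + (1 - x**2)**2)
print(lhs)                                 # 2*x**4 - 2*x**2 + 1, not the constant 1
# The leading term 2*x**4 is exactly the predicted 2*a_{2n}^2*x^{4n} with n = 1.
```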

Notice that the same argument does not work in the odd case, because when you write $P(x)^2+P(y)^2=1$ with $y=\sqrt{1-x^2}$, you get

$$P(x)^2+P(y)^2=\left(\sum_{k=0}^na_{2k+1} x^{2k+1}\right)^2+\left(\sum_{k=0}^na_{2k+1} \left(\sqrt{1-x^2}\right)^{2k+1}\right)^2$$ $$=\left(\sum_{k=0}^na_{2k+1} x^{2k+1}\right)^2+(1-x^2)\left(\sum_{k=0}^na_{2k+1} (1-x^2)^k\right)^2=1$$

And the coefficient of degree $4n+2$ vanishes: the first square contributes $a_{2n+1}^2x^{4n+2}$, while the factored-out $(1-x^2)$ makes the second contribute $-a_{2n+1}^2x^{4n+2}$.
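To see this cancellation concretely (a small sympy check with the arbitrary odd example $P(x)=x^3$, i.e. $n=1$, $a_3=1$):

```python
import sympy as sp

x = sp.symbols('x')

# P(x) = x^3 is odd, so P(y)^2 = y^6 = (1 - x^2)^3 on the circle.
lhs = sp.expand(x**6 + (1 - x**2)**3)
print(lhs)   # 3*x**4 - 3*x**2 + 1: the x**6 terms cancelled, as claimed
```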

C. There is an infinite sequence of solutions with odd $P$.

You have a general solution with $P(x)=T_{2n+1}(x)$, the Chebyshev polynomial of the first kind, for any $n$.

You have

$$T_{2n+1}(\cos \theta)=\cos (2n+1)\theta$$ $$T_{2n+1}(\sin \theta)=(-1)^n\sin (2n+1)\theta$$

For the cosine, it's a well-known property (and sometimes taken as the definition). For the sine, it's not very difficult:

$$T_{2n+1}(\sin(\theta))=T_{2n+1}\left(\cos\left(\frac{\pi}2-\theta\right)\right)=\cos\left(\frac{(2n+1)\pi}2-(2n+1)\theta\right)$$ $$=\cos\left(\frac{(2n+1)\pi}2\right)\cos (2n+1)\theta+\sin\left(\frac{(2n+1)\pi}2\right)\sin(2n+1)\theta$$ $$=\sin\left(\frac{(2n+1)\pi}2\right)\sin(2n+1)\theta$$

And $\sin\left(\frac{(2n+1)\pi}2\right)=\sin\left(n\pi+\frac{\pi}2\right)=(-1)^n$, so

$$T_{2n+1}(\sin(\theta))=(-1)^n\sin(2n+1)\theta$$

With this $P$, writing $x=\cos\theta$ and $y=\sin\theta$ (which is always possible when $x^2+y^2=1$), you obviously have

$$P(x)^2+P(y)^2=\cos^2 (2n+1)\theta+\sin^2 (2n+1)\theta=1$$

Again, the polynomials $-T_{2k+1}$ are also solutions.
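These identities are easy to confirm numerically (a sketch using numpy's `Chebyshev` class; the printed errors are just floating-point noise):

```python
import numpy as np
from numpy.polynomial import Chebyshev

theta = np.linspace(0, 2*np.pi, 1000)
for n in range(4):
    T = Chebyshev.basis(2*n + 1)          # T_{2n+1}
    # the sine identity proved above: T_{2n+1}(sin θ) = (-1)^n sin (2n+1)θ
    err_sin = np.abs(T(np.sin(theta)) - (-1)**n * np.sin((2*n+1)*theta)).max()
    # the circle condition itself: T_{2n+1}(cos θ)^2 + T_{2n+1}(sin θ)^2 = 1
    err_circ = np.abs(T(np.cos(theta))**2 + T(np.sin(theta))**2 - 1).max()
    print(n, err_sin, err_circ)           # both at floating-point noise level
```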

D. There is no other solution for odd $P$

This one is a bit tricky.

We will need another fact about polynomials: any sequence $(Q_n)$ of polynomials with $\deg Q_n=n$ forms a basis of the linear space of polynomials $\Bbb R[X]$.

To see why, we just have to notice that you can then write $x^n$ as a linear combination of $(Q_0,Q_1,\cdots,Q_n)$, uniquely. To this end, first annihilate the monomial in $x^n$ with some $\alpha Q_n$, then continue with $x^n-\alpha Q_n$ and the subsequent differences.

It's true for Chebyshev polynomials of the first kind, and we even have a bit more: since $T_n$ is even or odd according to the parity of $n$, an odd polynomial can be written uniquely as a linear combination of $\{T_{2n+1};n\in\Bbb N\}$, while an even polynomial can be written uniquely as a linear combination of $\{T_{2n};n\in\Bbb N\}$.
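For instance, numpy can perform this change of basis (a sketch; `poly2cheb` takes monomial coefficients, lowest degree first), and parity is visibly preserved:

```python
from numpy.polynomial.chebyshev import poly2cheb

# x^3 in the Chebyshev basis: x^3 = (3*T_1 + T_3)/4
print(poly2cheb([0, 0, 0, 1]))      # [0.  , 0.75, 0.  , 0.25]
# An odd polynomial only picks up odd-index Chebyshev coefficients:
print(poly2cheb([0, -3, 0, 4]))     # [0., 0., 0., 1.] : 4x^3 - 3x = T_3
```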

We will also need a similar statement about trigonometric polynomials: if two polynomials of the form $\alpha_0+\sum_{k=1}^n \alpha_k\cos k\theta + \beta_k\sin k\theta$ are equal, then all corresponding coefficients are equal. This stems from the fact that the functions $\cos k\theta, \sin k\theta$ are orthogonal w.r.t. the inner product $(u|v)=\frac1\pi\int_{-\pi}^\pi u(\theta)v(\theta) \,\mathrm{d}\theta$.
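A quick numerical spot-check of this orthogonality (a sketch; the uniform Riemann sum is accurate for these smooth periodic integrands):

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
ip = lambda u, v: 2 * np.mean(u * v)          # (1/π)∫ u v dθ ≈ 2·mean on [-π,π]

print(ip(np.cos(3*theta), np.cos(5*theta)))   # ≈ 0
print(ip(np.cos(3*theta), np.sin(3*theta)))   # ≈ 0
print(ip(np.sin(3*theta), np.sin(3*theta)))   # ≈ 1
```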

Thus, if you have a solution $P$ of your problem $P(x)^2+P(y)^2=1$, which is an odd polynomial, then we can write

$$P(x)=\sum_{k=0}^n a_{2k+1}x^{2k+1}$$

And, in the Chebyshev basis, there are unique $\{b_{2k+1}\}$ such that

$$P(x)=\sum_{k=0}^n b_{2k+1}T_{2k+1}(x)$$

And we can assume $a_{2n+1}\neq0$, or equivalently $b_{2n+1}\neq0$.

With $x=\cos\theta$ and $y=\sin\theta$, we have then (see the preceding part C for $T_{2k+1}(\sin\theta)$):

$$P(x)=P(\cos\theta)=\sum_{k=0}^n b_{2k+1}T_{2k+1}(\cos \theta)=\sum_{k=0}^n b_{2k+1}\cos (2k+1)\theta$$

$$P(y)=P(\sin\theta)=\sum_{k=0}^n b_{2k+1}T_{2k+1}(\sin \theta)=\sum_{k=0}^n (-1)^k b_{2k+1}\sin (2k+1)\theta$$

Write that $P$ is a solution:

$$\left(\sum_{k=0}^n b_{2k+1}\cos (2k+1)\theta\right)^2+\left(\sum_{k=0}^n (-1)^k b_{2k+1}\sin (2k+1)\theta\right)^2=1$$

Now, we are going to expand this beast carefully.

We expand the square like this:

$$(\alpha_0+\alpha_1+\cdots+\alpha_n)^2=\sum_{k=0}^n \alpha_k^2+2\sum_{i\lt j} \alpha_i\alpha_j$$

where the second sum runs over all $i,j$ such that $0\leq i\lt j\leq n$.

For our sum of two squares, this yields

$$\sum_{k=0}^n b_{2k+1}^2\left(\cos^2 (2k+1)\theta + \sin^2 (2k+1)\theta\right)+\\ 2\sum_{i\lt j} b_{2i+1}b_{2j+1} \cos (2i+1)\theta \cos (2j+1)\theta+\\ 2\sum_{i\lt j} (-1)^{i+j} b_{2i+1}b_{2j+1} \sin (2i+1)\theta \sin (2j+1)\theta=1$$

We want to "linearize" this, that is, use trigonometric identities to transform the expression so that no products of trigonometric functions remain. Then we will apply what we know about trigonometric polynomials to annihilate all the coefficients, in a carefully chosen order.

The first sum, let's call it $U$, is simply a constant,

$$U=\sum_{k=0}^n b_{2k+1}^2\left(\cos^2 (2k+1)\theta + \sin^2 (2k+1)\theta\right)=\sum_{k=0}^n b_{2k+1}^2$$

The second and the third sum may be simplified by remembering that

$$\cos(a+b)=\cos a\cos b- \sin a\sin b$$ $$\cos(a-b)=\cos a\cos b+ \sin a\sin b$$

So, pairing corresponding terms: when $(-1)^{i+j}=+1$, the products simplify to $b_{2i+1}b_{2j+1}\cos (2j-2i)\theta$; when $(-1)^{i+j}=-1$, they simplify to $b_{2i+1}b_{2j+1}\cos (2i+2j+2)\theta$.

Hence, the second and third sums in our preceding expansion simplify to $V+W$ with

$$V=\sum_{i\lt j, \ i+j\ \mathrm{even}} b_{2i+1}b_{2j+1}\cos (2j-2i)\theta$$ $$W=\sum_{i\lt j, \ i+j\ \mathrm{odd}} b_{2i+1}b_{2j+1}\cos (2i+2j+2)\theta$$

We must be careful, because among the factors $2j-2i$ and $2i+2j+2$ in the cosines, many overlap. But we have, with $x=\cos\theta, y=\sin\theta$:

$$P(x)^2+P(y)^2=U+2V+2W=1$$
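Before going on, it is reassuring to check the identity $P(\cos\theta)^2+P(\sin\theta)^2=U+2V+2W$ numerically for random coefficients (a numpy sketch; `b[k]` plays the role of $b_{2k+1}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
b = rng.normal(size=n + 1)                  # b[k] stands for b_{2k+1}
theta = np.linspace(0, 2*np.pi, 500)

lhs = (sum(b[k]*np.cos((2*k+1)*theta) for k in range(n+1))**2
       + sum((-1)**k * b[k]*np.sin((2*k+1)*theta) for k in range(n+1))**2)

U = np.sum(b**2)
V = sum(b[i]*b[j]*np.cos((2*j-2*i)*theta)
        for i in range(n+1) for j in range(i+1, n+1) if (i + j) % 2 == 0)
W = sum(b[i]*b[j]*np.cos((2*i+2*j+2)*theta)
        for i in range(n+1) for j in range(i+1, n+1) if (i + j) % 2 == 1)

print(np.abs(lhs - (U + 2*V + 2*W)).max())  # ~ 1e-15: the expansion is correct
```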

Let's have a closer look at those cosines. From now on, we will suppose $n$ is even; the proof is similar for odd $n$. We always have $0\leq i\lt j\leq n$, and:

  • $U$ is a constant, so there is no cosine at all
  • In $V$, they appear with factor $2j-2i$, and $i+j$ is even. So it can run from $4=2\cdot2-2\cdot0$ to $2n=2\cdot n-2\cdot 0$.
  • In $W$, they appear with factor $2i+2j+2$, and $i+j$ is odd. So it can range from $4=2\cdot1+2\cdot0+2$ to $4n=2\cdot n+2\cdot(n-1)+2$.

Since there is no constant term in $V$ or $W$, matching constant terms gives $U=1$, and every cosine coefficient of $2V+2W$ must vanish. Moreover, we can see that the "highest cosine", that is $\cos 4n\theta$, appears only when $i=n-1$ and $j=n$, thus with the single coefficient $b_{2n+1}b_{2n-1}$.

It's here that we use our hypothesis $b_{2n+1}\neq0$: it forces $b_{2n-1}=0$.

The developments that follow are devoted to proving that all coefficients must vanish except $b_{2n+1}$. At each step there will be a "highest remaining cosine" (that is, the cosine with the largest integer multiplying $\theta$; we will call this integer the $\theta$-factor for short) whose coefficient contains a single remaining term, even though the $\theta$-factors originally overlap. That is, by annihilating the $b_{2k+1}$ in the right order, the problem is much simplified.

To give an idea, the order for $k$ will be (for even $n$):

$$n-1,n-3,n-5,\cdots,3,1,0,2,4,6,\cdots,n-2$$

We have already proved $b_{2n-1}=0$; that's the case $k=n-1$. Notice also that we can consider only $W$ as long as our $\theta$-factor is greater than $2n$, since there is no factor beyond this one in $V$.

Let's represent our $\theta$-factors in a table, as function of indices $i$ and $j$, for $n=8$, in order to show how (and why) the following proof works.

$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline j\backslash i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \hline 0 & * \\ \hline 1 & 4 & * \\ \hline 2 & * & 8 & * \\ \hline 3 & 8 & * & 12 & * \\ \hline 4 & * & 12 & * & 16 & * \\ \hline 5 & 12 & * & 16 & * & 20 & * \\ \hline 6 & * & 16 & * & 20 & * & 24 & * \\ \hline 7 & 16 & * & 20 & * & 24 & * & 28 & * \\ \hline 8 & * & 20 & * & 24 & * & 28 & * & 32 & * \\ \hline \end{array}$$

The table reads like this: for $i=2$, $j=3$, there is a $b_{2i+1}b_{2j+1}\cos k\theta$ term in $W$, with $k=12$ (the $\theta$-factor, read from the table). Also, the entries of each growing diagonal (along which $i+j$ is constant) share the same $k$. Hence, given that the overall coefficient of each cosine must vanish, a growing diagonal gives rise to an equation between the $b_{2i+1}$. For example, for $i+j=9$, you get $(b_{17}b_{3}+b_{15}b_{5}+b_{13}b_{7}+b_{11}b_{9})\cos 20\theta =0$. Since $i+j$ must be odd in $W$, the starred cells (where $i+j$ is even) contribute nothing to $W$.

For a given $n$ like here, we can continue by hand: we have supposed $b_{2n+1}=b_{17} \neq 0$, and we already saw that $b_{2n-1}=b_{15}=0$, precisely because $32$ is alone on its diagonal. On the next diagonal, there are two terms, $b_{17}b_{11}$ and $b_{15}b_{13}$, and their sum must be $0$. But $b_{15}=0$, so $b_{11}=0$. And we go on, proving successively that $b_{15}$, $b_{11}$, $b_{7}$, $b_{3}$, that is, the $b_{2k+1}$ for $k=7,5,3,1$, are null. Then we are left with only even $k$, which we will attack with $V$ a bit later.
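For small degrees, one can let a computer confirm where all this is heading (a minimal sympy sketch for degree $5$, that is $n=2$, where the full system consists of the constant term $U=1$ and the coefficients of $\cos4\theta$ and $\cos8\theta$; solving it shows that exactly one $b_{2k+1}$ survives, with value $\pm1$):

```python
import sympy as sp

b1, b3, b5 = sp.symbols('b1 b3 b5', real=True)

eqs = [
    b1**2 + b3**2 + b5**2 - 1,   # constant term: U = 1
    b1*b3 + b1*b5,               # coefficient of cos 4θ (one term from W, one from V)
    b3*b5,                       # coefficient of cos 8θ (a single W term)
]
print(sp.solve(eqs, [b1, b3, b5]))
# -> only (±1,0,0), (0,±1,0), (0,0,±1): P = ±T_1, ±T_3 or ±T_5
```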

Now we prove that this approach always works, for any even $n$, by induction. We suppose $b_{2n+1}\neq 0$, as before, and also $b_{2(n-1)+1}=b_{2(n-3)+1}=\cdots=b_{2(n-2p+1)+1}=0$, and we want to prove $b_{2(n-2p-1)+1}=0$, working on the diagonal that contains $b_{2(n-2p-1)+1}b_{2n+1}=b_{2i_0+1}b_{2j_0+1}$, with $i_0=n-2p-1$ and $j_0=n$ (on the bottom line).

Consider all the coefficients of $\cos (2i+2j+2)\theta=\cos 4(n-p)\theta$ (that is, the whole diagonal); they yield the equation:

$$\sum_{i+j=2n-2p-1} b_{2i+1}b_{2j+1}=\sum_{i=n-2p-1}^{n-p-1} b_{2i+1}b_{2(2n-2p-1-i)+1}=0$$

where the sum runs over the valid $i,j$, that is, those that also satisfy $0\leq i\lt j\leq n$.

We notice that $i$ and $j=2n-2p-1-i$ don't have the same parity (by construction, since $i+j$ must be odd), so one of them is odd, for any $i$. If it's $i$: apart from $i=n-2p-1$, which gives the term $b_{2(n-2p-1)+1}b_{2n+1}$, all other odd values of $i$ are greater, so $b_{2i+1}=0$ by hypothesis. If it's $j=2n-2p-1-i$: then $j$ is also greater than $n-2p-1$, since $i\lt j\leq n$ implies $i\lt n$, so again $b_{2j+1}=0$ by hypothesis. Hence, apart from the term $b_{2(n-2p-1)+1}b_{2n+1}$, all terms are zero. But the sum must vanish, and $b_{2n+1}\neq0$, so we must have $b_{2(n-2p-1)+1}=0$, and the induction step is proved.

We can continue the induction as long as $4(n-p)\gt2n$; otherwise there would be terms from $V$ in the equation, and we would not be able to conclude. That is, $p<\frac{n}2$, i.e. until $p=\frac{n}2-1$ inclusive.

Hence it is proved that $b_{2(n-2p-1)+1}=0$ for all $p$ such that $0\leq p\leq \frac{n}2-1$. And $n-2(\frac{n}2-1)-1=1$. Thus, simplifying a bit, we have

$$b_{2i+1}=0$$

For all odd $i$ such that $1 \leq i \leq n-1$, that is, all odd $i$.

Now, we will do the same with $V$. Notice first that every term of $W$ has $i+j$ odd, so it contains a factor $b_{2k+1}$ with $k$ odd; by what we have just proved, all these terms are now zero, and only $V$ remains.

Again, let's have a look at $\theta$-factors for $i,j$: the corresponding term in $V$ is $b_{2i+1}b_{2j+1} \cos 2(j-i)\theta$, and now $i+j$ must be even, and we have still $0\leq i \lt j \leq n$.

$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline j\backslash i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \hline 0 & \\ \hline 1 & * & \\ \hline 2 & 4 & * & \\ \hline 3 & * & 4 & * & \\ \hline 4 & 8 & * & 4 & * & \\ \hline 5 & * & 8 & * & 4 & * & \\ \hline 6 & 12 & * & 8 & * & 4 & * & \\ \hline 7 & * & 12 & * & 8 & * & 4 & * & \\ \hline 8 & 16 & * & 12 & * & 8 & * & 4 & * & \\ \hline \end{array}$$

This time, the $\theta$-factors are constant along decreasing diagonals ($j-i$ constant), and we will start with the bottom-left corner, where we have $b_{2n+1}b_{2\cdot0+1}=0$, thus $b_1=0$. This works for any even $n$, because $2(n-0)$ is the maximum value that can be taken by $2(j-i)$, and it is attained only for $j=n$ and $i=0$; thus there is always only one term, $b_{2n+1}b_{2\cdot0+1}$, before $\cos 2n\theta$.

We continue by induction, supposing that $b_{2\cdot0+1}=\cdots=b_{2(2p-2)+1}=0$ (that is, $b_{2k+1}=0$ for all even $k\leq 2p-2$), and we want to prove that $b_{2(2p)+1}=0$. We work on the diagonal starting on the bottom line at $j_0=n$ and $i_0=2p$. The whole diagonal yields the equation

$$\sum_{i=0}^{2p} b_{2i+1}b_{2(n-2p+i)+1}=0$$

Considering a term $b_{2i+1}b_{2(n-2p+i)+1}$ other than $b_{2i_0+1}b_{2j_0+1}$: either both $i$ and $n-2p+i$ are odd, and the term is null, for we proved earlier (when working with $W$) that $b_{2i+1}=0$ in this case; or they are both even, and then $i\lt i_0$, thus $b_{2i+1}=0$ by the induction hypothesis. Again, there is only one term remaining in the sum, namely $b_{2i_0+1}b_{2j_0+1}=b_{2(2p)+1}b_{2n+1}$. Since the sum must vanish and we suppose $b_{2n+1}\neq0$, we must have $b_{2(2p)+1}=0$, and the induction step is proved.

We can follow the induction as long as $i_0 \lt j_0 \leq n$, that is $2p \lt n$. Hence we have proved that $b_{2i+1}=0$ for all even $i$ below $n$.

To sum up, we have proved that all $b_{2i+1}$ are null, except $b_{2n+1}$. We can now rewrite the equation $P(x)^2+P(y)^2=1$ with $x=\cos \theta$, $y=\sin\theta$ and

$$P(x)=\sum_{k=0}^n b_{2k+1}T_{2k+1}(x)=b_{2n+1}T_{2n+1}(x)$$

or, in trigonometric form,

$$P(x)=b_{2n+1}\cos (2n+1)\theta$$ $$P(y)=(-1)^nb_{2n+1}\sin(2n+1)\theta$$

Then

$$P(x)^2+P(y)^2=b_{2n+1}^2=1$$

And finally $b_{2n+1}=\pm1$.

So, we have proved that if $n$ is even and $\deg P=2n+1$, then $P=\pm T_{2n+1}$.

I'll leave the case where $n$ is odd as an "exercise", as the proof is exactly similar, with a small modification in the order of elimination of the coefficients $b_{2i+1}$: above it was $n-1,n-3,\cdots,3,1,0,2,\cdots,n-2$; if $n$ is odd, it will be $n-1,n-3,\cdots,2,0,1,3,\cdots,n-2$.

So, the conclusion of the whole answer is:

If $P(x)^2+P(y)^2=1$ whenever $x^2+y^2=1$, then either $P=\pm \frac{\sqrt{2}}2$ or $P=\pm T_{2n+1}$ for some $n$.


Here is a "simple" solution.

Lemma. Consider a polynomial $Q \in\mathbb{C}[X]$ such that $|{Q(e^{it})}|=1$ for $t\in\mathbb{R}$. Then $Q(X)=\lambda X^n$ for some nonnegative integer $n$, and some complex $\lambda$ with $|\lambda|=1$.

Proof. Clearly $Q$ is not zero, so let $d=\deg Q$, and suppose that $$ Q(X)=a_0+a_1X+\cdots+a_d X^d. $$ Also, let $m=\mathop{\rm val}(Q)=\min\{k\leq d: a_k\ne 0\}$.

Suppose that $m<d$. The coefficient of $e^{i(d-m)t} $ in the expansion of $|Q(e^{it})|^2$ is $a_d\overline{a_m}$, so $$ a_d\overline{a_m}=\frac{1}{2\pi}\int_0^{2\pi} |{Q(e^{it})}|^2e^{i(m-d)t}\,dt= \frac{1}{2\pi}\int_0^{2\pi} e^{i(m-d)t}\,dt=0 $$ which is absurd since $a_d\ne 0$ and $a_m\ne 0$. It follows that $d=m$ so we can take $n=d$, $\lambda=a_d$, and the lemma is proved. $\qquad\square$

$\qquad$ Consider a polynomial $P(X)\in\mathbb{R}[X]$ that satisfies the proposed condition. That is $$ \forall\, t\in \mathbb{R},\quad \left|P(\cos t)+iP\left(\cos\left(t-\frac{\pi}{2}\right)\right)\right|=1$$ Now, let $d=\deg P$, then there are $(b_0,b_1,\ldots,b_d)\in\mathbb{R}^{d+1}$, such that $$ P(X)=\sum_{k=0}^d b_kT_k(X) $$ where $T_k$ is the Chebyshev polynomial of the first kind and degree $k$. (because Chebyshev's polynomials of the first kind constitute a basis for $\mathbb{R}[X]$.)

Now, if $q(t)=P(\cos t)+iP\left(\cos\left(t-\frac{\pi}{2}\right)\right)$ then \begin{align*} q(t)&=\sum_{k=0}^db_k(\cos(kt)+i\cos(kt-k\pi/2))\\ &=(1+i)b_0+\frac{1}{2}\sum_{k=1}^db_k\left(e^{ikt}+e^{-ikt}\right) +\frac{i}{2}\sum_{k=1}^db_k\left(i^{-k}e^{ikt}+i^ke^{-ikt}\right)\\ &=(1+i)b_0+\frac{1}{2}\sum_{k=1}^db_k(1+i^{1-k}) e^{ikt} +\frac{1}{2}\sum_{k=1}^db_k(1+i^{1+k}) e^{-ikt} \\ &=e^{-idt}Q(e^{it}) \end{align*} where $$ Q(X)=\frac{1}{2}\sum_{k=1}^db_k(1+i^{1+k}) X^{d-k} +(1+i)b_0X^d+\frac{1}{2}\sum_{k=1}^db_k(1+i^{1-k}) X^{d+k} $$ By assumption we have $|{Q(e^{it})}|^2=1$ and, according to the Lemma, $Q$ must be a monomial. So, we have one of the following cases:

  • $|{(1+i)b_0}|=1$ and $b_1=\ldots=b_d=0$. This corresponds to $P(X)=\pm\frac{1}{\sqrt{2}}T_0(X)$ (because $b_0$ is real).
  • For some $k\in\{1,\ldots,d\}$ we have $|{b_k(1+i^{1+k})}|=2$, $|{b_k(1+i^{1-k})}|=0$, and the other $b_j$'s are $0$. Since $b_k\ne0$, the second condition implies that $k\equiv3\pmod{4}$, and substituting into the first we get $b_k=\pm1$. So in this case $P(X)=\pm T_k(X)$ for some $k\equiv3\pmod{4}$.

  • For some $k\in\{1,\ldots,d\}$ we have $|{b_k(1+i^{1-k})}|=2$, $|{b_k(1+i^{1+k})}|=0$, and the other $b_j$'s are $0$. Again, the second condition implies that $k\equiv1\pmod{4}$, and substituting into the first we get $b_k=\pm1$. So in this case $P(X)=\pm T_k(X)$ for some $k\equiv1\pmod{4}$.

Finally, we have proved that either $P(X)=\pm\frac{1}{\sqrt{2}}$ or $P(X)=\pm T_{k}$ for some odd integer $k$. The converse is trivially true, since the converse of the Lemma is trivial. Done.
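As a numerical cross-check of the computation of $Q$ above (a numpy sketch taking $P=T_3$, i.e. $b=(0,0,0,1)$, as test case; for this $P$, the displayed formula collapses to $Q(X)=1$):

```python
import numpy as np

b = np.array([0, 0, 0, 1], dtype=complex)   # Chebyshev coefficients of P = T_3
d = len(b) - 1
t = np.linspace(0, 2*np.pi, 1000)

P = lambda x: 4*x**3 - 3*x                  # T_3
q = P(np.cos(t)) + 1j*P(np.cos(t - np.pi/2))

# Q as coefficients of X^0 .. X^{2d}, following the displayed formula:
coef = np.zeros(2*d + 1, dtype=complex)
coef[d] = (1 + 1j)*b[0]
for k in range(1, d + 1):
    coef[d - k] += b[k]*(1 + 1j**(1 + k))/2
    coef[d + k] += b[k]*(1 + 1j**(1 - k))/2

z = np.exp(1j*t)
Q = sum(c * z**j for j, c in enumerate(coef))
print(np.abs(q - np.exp(-1j*d*t)*Q).max())  # ≈ 0: q(t) = e^{-idt} Q(e^{it})
print(np.abs(np.abs(Q) - 1).max())          # ≈ 0: |Q(e^{it})| = 1
```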


Using some complex analysis it can be done in a shorter way.

Suppose that $P$ is such a nonconstant real polynomial, and consider the complex rational function $$ Q(z)=P\left(\frac{z+\frac1z}2\right) + iP\left(\frac{z-\frac1z}{2i}\right). $$ We can see that $Q$ may have a pole at $0$ of order at most $\deg P$, another pole at $\infty$, and no poles elsewhere.

Since $Q$ maps the unit circle to itself, it must be a finite Blaschke product. But all the Blaschke factors must have their poles at $0$ or $\infty$; therefore, $Q$ is a power of $z$ times a unimodular constant: $Q(z)=e^{ia} z^k$ with some integer $k$ and real $a$.

Substituting $z=e^{it}$ we get $$ P(\cos t) +i P(\sin t) = \cos(kt+a) +i \sin(kt+a), $$ so $P(\cos t)= \cos(kt+a)$ and $P(\sin t)=\sin(kt+a)$.

The function $P(\cos t)$ is even, so $a$ is a multiple of $\pi$. Hence, $P(\cos t)= \pm\cos(kt)$ and $P(\sin t)=\pm\sin(kt)$. The first equation shows that $P= \pm T_{|k|}$, so $\pm P$ is a Chebyshev polynomial. The second equation shows that $k$ must be odd.
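A quick numerical illustration (a sketch with the known solution $P=T_3$, for which one finds $Q(z)=z^{-3}$, i.e. $e^{ia}=1$ and $k=-3$):

```python
import numpy as np

P = lambda x: 4*x**3 - 3*x                       # T_3 as a test solution
Q = lambda z: P((z + 1/z)/2) + 1j*P((z - 1/z)/(2j))

z = np.exp(1j*np.linspace(0, 2*np.pi, 1000))
print(np.abs(Q(z) - z**(-3)).max())              # ≈ 0 on the unit circle
```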


If I am not wrong, I have a (complicated) solution.

A) Put $x(t)=P(\cos(t))$, $y(t)=P(\sin(t))$. The functions $x,y$ are ${\mathcal C}^{\infty}$, and as $(x(t))^2+(y(t))^2=1$ for all $t$, there exists a ${\mathcal C}^{\infty}$ function $f$ such that $x(t)=\cos(f(t))$ and $y(t)=\sin(f(t))$. Differentiating (and cancelling the signs in the first equation), we get $\sin(t)P^{\prime}(\cos(t))=f^{\prime}(t)\sin(f(t))$ and $\cos(t)P^{\prime}(\sin(t))=f^{\prime}(t)\cos(f(t))$. Using $\sin(f(t))=P(\sin(t))$ and $\cos(f(t))=P(\cos(t))$, multiplying the two equations by $P(\sin(t))$ and $P(\cos(t))$ respectively, and adding, we get $$f^{\prime}(t)=\sin(t)P(\sin(t))P^{\prime}(\cos(t))+\cos(t)P(\cos(t))P^{\prime}(\sin(t))=A(t)$$
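One can test this formula for $f'$ numerically (a sketch with the known solution $P=T_3$, for which $f(t)=-3t$ up to a multiple of $2\pi$, so $A$ should be identically $-3$):

```python
import numpy as np

t  = np.linspace(0, 2*np.pi, 1000)
P  = lambda x: 4*x**3 - 3*x      # T_3
dP = lambda x: 12*x**2 - 3       # its derivative

A = (np.sin(t)*P(np.sin(t))*dP(np.cos(t))
     + np.cos(t)*P(np.cos(t))*dP(np.sin(t)))
print(np.abs(A + 3).max())       # ≈ 0: A(t) = f'(t) = -3
print(np.mean(A))                # ≈ -3: the a_0 of part B is indeed an integer
```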

B) Now $A(t)$ has an expression with a finite number of terms:

$$A(t)=a_0+\sum_{k\not =0}(a_k\cos(kt)+b_k\sin(kt))$$ for some real constants $a_0, a_k, b_k$. It is clear that $\int_0^{2\pi}A(t)dt=f(2\pi)-f(0)=2\pi a_0$. But we have $\displaystyle P(1)=\cos(f(0))=\cos(f(2\pi))$ and $P(0)=\sin(f(0))=\sin(f(2\pi))$. Thus $f(2\pi)-f(0)\in 2\pi\mathbb{Z}$, and hence $a_0\in \mathbb{Z}$.

We get that $$f(t)=a_0t+c+\sum_{k\not =0}(\alpha_k\cos(kt)+\beta_k\sin(kt))=a_0t+B(t)$$ for some new constants $c, \alpha_k, \beta_k\in \mathbb{R}$.

We have $$P(\cos(t))+iP(\sin(t))=\exp(ia_0t+iB(t))$$

C) Now we have $\displaystyle (P(\cos(t))+iP(\sin(t)))\exp(-ia_0t)=\exp(iB(t))$. But as $a_0\in \mathbb{Z}$, $(P(\cos(t))+iP(\sin(t)))\exp(-ia_0t)=D(\exp(it),\exp(-it))$ where $D$ is in $\mathbb{C}[x,y]$, and also $iB(t)=E(\exp(it), \exp(-it))$ for some $E\in \mathbb{C}[x,y]$. Now the two functions $\displaystyle D(z,1/z)$ and $\displaystyle \exp(E(z,1/z))$ are analytic in $U=\mathbb{C}-\{0\}$, and they are equal on the unit circle. Hence they are equal on $U$.

Now write $\displaystyle D(z,1/z)=\frac{G(z)}{z^M}$ with $M\in \mathbb{Z}$ and $G$ a polynomial in $z$ such that $G(0)\not =0$. Suppose that $G$ is not constant. Then there exists $u\in \mathbb{C}$, $u\neq 0$, such that $G(u)=0$. But then we get $\exp(E(u, 1/u))=0$, a contradiction. Hence $G$ is a constant $c$.

D) We have proven that $\displaystyle D(z,1/z)=\frac{c}{z^M}$, and replacing $z$ by $\exp(it)$, we have:

$$P(\cos(t))+iP(\sin(t))=c\exp(iNt)$$ with $N=a_0-M\in \mathbb{Z}$. The constant $c$ is clearly of modulus $1$, hence $c=\exp(id)$ for some $d\in \mathbb{R}$. We have proven that $P(\cos(t))=\cos(Nt+d)$ and $P(\sin(t))=\sin(Nt+d)$, with $N\in \mathbb{Z}$.

Now the answer by Jean-Claude Arbaut finishes the job.


Not (yet) an answer, but perhaps a strategy that could streamline @Jean-Claude's argument.


Parameterize $(x,y)$ by $(\cos\theta,\sin\theta)$, and then invoke complex exponential relations: $$x \to \cos\theta \to \frac12\left( e^{i\theta}+ e^{-i\theta}\right) \qquad y \to \sin\theta \to \frac{1}{2i} \left( e^{i\theta}-e^{-i\theta}\right)$$ These allow us to write the target relation as a "polynomial" (with negative exponents) in $e^{i\theta}$: $$Q(e^{i\theta}) := P(x)^2 + P(y)^2 - 1$$ Note that $Q$ is identically zero for all $\theta$. In particular, taking our polynomial $P$ to have degree $n$, and writing

$$P(z) := \sum_{k=0}^{n} a_k z^k$$

we have that $Q$ vanishes at the "$n$-th roots of unity", where $\theta_k := 2\pi k/n$ and $k=0$, $1$, $\dots$, $n-1$.

The relations $Q(\;\exp(i\theta_k)\;) = 0$ form a system of $n$ non-linear equations in $n+1$ unknowns, $a_k$. That's just underdetermined. However, if we take as given (based on other answers) that $P$ must be an odd polynomial ---with $n = 2m-1$ and $a_{\text{even}} = 0$--- then the reduced system $$Q(\;\exp(i\theta_{\text{odd}})\;) = 0$$ has $m$ equations in $m$ unknowns, $a_{\text{odd}}$. In theory, this system is solvable; by @Jean-ClaudeArbaut's argument, the solution is unique (up to sign, since both $P$ and $-P$ satisfy the target relation), giving coefficients of Chebyshev polynomials of the first kind. Analysis of the system may give a more-direct proof of this fact, although I haven't had much luck so far.


Example. $n = 3$, so that $P(z) = a_1 z + a_3 z^3$ (with $a_3 \neq 0$).

Define $\omega := \exp(2i\pi/3)$, and our system is $$\begin{align} Q(\omega^1) = 0 &\quad\to\quad a_3 \left( 4 a_1 + 3 a_3 \right)\left(1+\omega^2\right) - \omega \left( 16 - 16 a_1^2 - 24 a_1 a_3 - 10 a_3^2 \right) = 0 \\ Q(\omega^3) = Q(1) = 0 &\quad\to\quad (a_1+a_3)^2 = 1 \end{align}$$

The latter equation gives $a_1 = \pm 1 - a_3$; substitution into the first equation gives $$( 1 - \omega )^2 \; a_3\;(a_3\mp 4) = 0 \qquad\to\qquad a_3 = \pm 4\quad\text{(since $a_3 \neq 0$)}$$ Consequently, $$(a_1, a_3)\;\in\;\left\{\;(-3,4),\;(3,-4)\;\right\} \qquad\to\qquad P = \pm T_3$$ where $T_3$ is the Chebyshev polynomial.
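As a cross-check of this example (a sympy sketch that, instead of sampling at roots of unity, imposes the identity directly in $x$ via $y^2=1-x^2$; dropping the $a_3\neq0$ assumption also surfaces the degree-1 solutions):

```python
import sympy as sp

x, a1, a3 = sp.symbols('x a1 a3', real=True)

# For odd P, P(y)^2 = y^2*(a1 + a3*y^2)^2 is a polynomial in y^2 = 1 - x^2:
Px2 = (a1*x + a3*x**3)**2
Py2 = (1 - x**2)*(a1 + a3*(1 - x**2))**2
eqs = sp.Poly(sp.expand(Px2 + Py2 - 1), x).all_coeffs()
print(sp.solve(eqs, [a1, a3]))
# -> (a1, a3) ∈ {(-3, 4), (3, -4), (1, 0), (-1, 0)}: P = ±T_3 or ±T_1
```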


With $n=5$, the equations are already too complicated for me to want to write down. Nevertheless, we can use, say, the method of resultants to eliminate the "lesser" $a_k$s, to get a unique(ish) $a_5 = \pm 16$, and so forth.


For all (odd) $n$, our system of equations always contains the simple relation $$Q(\omega^n) = Q(1) = 0 \quad\to\quad a_1+a_3+\cdots+a_n = \pm 1$$ where $\omega = \exp(2\pi i/n)$ is the corresponding principal root of unity.

Moreover, the remaining equations use identical "coefficients" on the powers of $\omega$, just permuted. For instance, with $n = 5$, if we have $$Q(\omega^1) = 0 \qquad\to\qquad b_0 (1+\omega) + b_1 ( \omega^2 + \omega^4 ) + b_2 \omega^3 = 0$$ for appropriate $b_k$s, then $$Q(\omega^3) = 0 \qquad\to\qquad b_0 ( 1 + \omega^3 ) + b_1 ( \omega + \omega^2 ) + b_2 \omega^4 = 0$$ And, of course, there's a good deal of structure in the $b_k$s, which arise from linear combinations of binomial expansions of powers of $(\omega+\omega^{-1})$ and $(\omega - \omega^{-1})$.

Marshaling these facts in just the right way could possibly make the Chebyshev connection more immediate.