Property of a polynomial with no positive real roots

Another proof, due to Pólya (1928):

Suppose that $f(x) = \sum_{j=0}^d f_j x^j$ is positive on $[0, \infty)$. We claim that, for $N$ sufficiently large, $(1+x)^N f(x)$ has positive coefficients.

Proof: Suppose to the contrary that there is an infinite sequence of pairs $(k_i, N_i)$ such that the coefficient of $x^{k_i}$ in $(1+x)^{N_i} f(x)$ is nonpositive, with $N_i \to \infty$. Passing to a subsequence, we may assume that $\lim_{i \to \infty} k_i/N_i$ exists; call this limit $\alpha$. It will be convenient to assume $\alpha \neq 1$; if $\alpha=1$ then replace $f$ by its reversal $\sum_{j=0}^d f_{d-j} x^j$, which will replace $k_i$ by $N_i-k_i$ and hence $\alpha$ by $1-\alpha$.

Now, the coefficient of $x^{k_i}$ in $(1+x)^{N_i} f(x)$ is $$\sum_{j=0}^d f_j \binom{N_i}{k_i-j} = \binom{N_i}{k_i} \sum_{j=0}^d f_j \frac{k_i}{(N_i-k_i+1)} \frac{(k_i-1)}{(N_i-k_i+2)} \cdots \frac{(k_i-j+1)}{(N_i-k_i+j)} .$$ So the sum on the right hand side is $\leq 0$ for all $i$.
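This rewriting of $\binom{N}{k-j}$ as $\binom{N}{k}$ times a telescoping product of ratios can be sanity-checked numerically; here is a small Python sketch (the values $N=20$, $k=7$ are arbitrary choices, not from the argument itself):

```python
from fractions import Fraction
from math import comb

def ratio(N, k, j):
    """Product of (k - t + 1)/(N - k + t) for t = 1..j, as an exact fraction."""
    r = Fraction(1)
    for t in range(1, j + 1):
        r *= Fraction(k - t + 1, N - k + t)
    return r

# binom(N, k-j) should equal binom(N, k) times the telescoping ratio
N, k = 20, 7
for j in range(5):
    assert comb(N, k - j) == comb(N, k) * ratio(N, k, j)
print("identity verified")
```

Each factor $(k-t+1)/(N-k+t)$ tends to $\alpha/(1-\alpha)$ as $k/N \to \alpha$, which is what drives the limit in the next step.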

Sending $i \to \infty$, the right hand sum approaches $\sum_{j=0}^d f_j \left( \frac{\alpha}{1-\alpha} \right)^j = f\left( \frac{\alpha}{1-\alpha} \right) >0$. (We wanted $\alpha \neq 1$ so that $\tfrac{\alpha}{1-\alpha}$ is well defined.) So we have a sequence of nonpositive numbers approaching a positive limit, a contradiction.
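As a concrete illustration (not part of the proof), a short Python sketch with the sample polynomial $f(x)=x^2-x+1$, which is positive on $[0,\infty)$ but has a negative coefficient, finds the smallest $N$ that works:

```python
from math import comb

f = [1, -1, 1]  # f(x) = 1 - x + x^2, positive on [0, infinity)

def coeffs(f, N):
    """Coefficients of (1+x)^N * f(x)."""
    d = len(f) - 1
    return [sum(f[j] * comb(N, k - j) for j in range(d + 1) if k - j >= 0)
            for k in range(N + d + 1)]

N = 0
while min(coeffs(f, N)) <= 0:
    N += 1
print(N)  # -> 3: (1+x)^3 (x^2 - x + 1) = 1 + 2x + x^2 + x^3 + 2x^4 + x^5
```

Note that $N=1$ and $N=2$ fail only because of zero coefficients ($(1+x)f(x) = x^3+1$), while $N=3$ already makes every coefficient strictly positive.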


Call $S$ the set of polynomials satisfying the first property.

It is straightforward to show that if $P,Q$ are two nonzero polynomials, then $PQ \in S \iff P \in S$ and $Q \in S$.

Since $\Bbb R[x]$ is a unique factorisation domain, it is enough to determine which irreducible polynomials are in $S$.

Obviously, any polynomial with nonnegative coefficients is in $S$, and any polynomial with a positive real root isn't, so we are left with checking polynomials of the form $x^2-ax+b$ with $a,b>0$ and $\delta = a^2/b < 4$.

Let $p(x) = x^2-ax+b$.

We try to show it is in $S$ by multiplying it by $p(-x)$:
$p(x)p(-x) = x^4 + (2b-a^2)x^2 + b^2$.

It turns out that this polynomial is "less bad" than $p$: viewed as a quadratic in $y=x^2$, it has the same shape $y^2-a'y+b'$ with $a'=a^2-2b$ and $b'=b^2$, so the new polynomial has $\delta' = a'^2/b' = (2-\delta)^2$.

If $\delta < 2$ then $p(x)p(-x)$ has nonnegative coefficients, so it is in $S$, which proves that $p$ is in $S$. So values of $\delta$ close to $0$ are good, while values close to $4$ are bad.

The map $f : \delta \mapsto \delta' = (2- \delta)^2$ has two fixed points, $1$ and $4$. Checking the derivative shows that the map is repelling at $4$. More precisely, $f'(\delta) \in [2 \sqrt 2, 4]$ for $\delta \in [2+\sqrt 2, 4]$, which shows that $f^n(\delta)$ eventually gets lower than $2$ after some $n = \Theta\left(\log\frac{1}{4-\delta}\right)$ steps.
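For example, a tiny sketch of the iteration (with the illustrative values $a=1.9$, $b=1$, so $\delta=3.61$, quite close to the bad fixed point $4$):

```python
# Each step of the iteration corresponds to replacing the current polynomial
# q(x) by q(x)q(-x), which is a polynomial in x^2 with the new ratio (2-delta)^2.
delta = 1.9**2 / 1  # a = 1.9, b = 1
steps = 0
while delta >= 2:
    delta = (2 - delta) ** 2
    steps += 1
print(steps, delta)  # two steps: 3.61 -> 2.5921 -> 0.3505... < 2
```

So two rounds of multiplying by the reflected polynomial already produce a cofactor with nonnegative coefficients in this case.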

This shows that every irreducible quadratic $x^2-ax+b$ is in $S$, and now we know exactly which irreducible polynomials are not in $S$: the $x-a$ with $a>0$.

So a polynomial is in $S$ if and only if it has no positive real root.


copy of answer on duplicate:

It is not difficult to see that it is enough to prove this for the case $$F(x)=x^2+ax+b$$ where $F$ is irreducible, i.e. $a^2-4b<0$. So consider $$(x^2+ax+b)\sum_{i=0}^\infty c_ix^i$$ We want to show that we can choose the $c_i$ to be equal to $0$ for $i>N$ for some $N$, and still have all non-negative coefficients.

Now: $$c_0b\geq 0\Leftrightarrow c_0\geq 0$$ To make things easier, we set $c_0=1$. Then the next condition reads $$c_0a+c_1b\geq 0\Leftrightarrow c_1\geq -\frac{a}{b}$$ Again, to make things simple, set $c_1=-\frac{a}{b}$. The next condition reads $$c_2b+c_1a+c_0\geq 0\Leftrightarrow c_2\geq \frac{1}{b}\left(\frac{a^2}{b}-1\right)=\frac{a^2}{b^2}-\frac{1}{b}$$ Now if $\frac{a^2}{b^2}-\frac{1}{b}\leq 0$ we are done, since we can set $c_2=c_3=\dots=0$. If not, we set $c_2=\frac{a^2}{b^2}-\frac{1}{b}$.

The next condition reads $$c_3b+c_2a+c_1\geq 0\Leftrightarrow c_3\geq \frac{1}{b}\left(\frac{a}{b}-\frac{a^3}{b^2}+\frac{a}{b}\right)=\frac{2a}{b^2}-\frac{a^3}{b^3}$$ Again, if $\frac{2a}{b^2}-\frac{a^3}{b^3}\leq 0$ we are done by setting $c_3=c_4=\dots=0$. If not, we set $c_3=\frac{2a}{b^2}-\frac{a^3}{b^3}$. The next condition reads $$c_4b+c_3a+c_2\geq 0\Leftrightarrow c_4\geq \frac{1}{b}\left(\frac{1}{b}-\frac{a^2}{b^2}+\frac{a^4}{b^3}-\frac{2a^2}{b^2}\right)=\frac{a^4}{b^4}-\frac{3a^2}{b^3}+\frac{1}{b^2}$$ This process keeps repeating.
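The process above is easy to mechanize. Here is a Python sketch of the greedy choice (the function name `greedy_cofactor` and the step cap are my own additions):

```python
def greedy_cofactor(a, b, max_steps=10_000):
    """Greedily build c_0, c_1, ... for F(x) = x^2 + a*x + b: set each c_n
    equal to its lower bound -(a*c_{n-1} + c_{n-2})/b, and stop as soon as
    that bound is <= 0 (all later c_i can then be taken to be 0)."""
    c = [1.0]  # c_0 = 1
    for _ in range(max_steps):
        prev2 = c[-2] if len(c) >= 2 else 0.0
        bound = -(a * c[-1] + prev2) / b
        if bound <= 0:
            return c
        c.append(bound)
    raise RuntimeError("no nonpositive bound found within max_steps")
```

For $F(x)=x^2-1.9x+1$ (i.e. $a=-1.9$, $b=1$) this reproduces the sequence $1,\ 1.9,\ 2.61,\ 3.059,\ \dots$ worked out below, stopping after $c_8$.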

The next step is to find a closed form for this sequence of lower bounds. To that end we define

$$g(n,m)=\begin{cases} 0 &\text{ if } m>n\\ 0 &\text{ if } n\not\equiv m\mod 2\\ (-1)^{\frac{m+n}{2}}\binom{\frac{m+n}{2}}{m} &\text{ otherwise} \end{cases}$$ Then the conditions read $$c_n\geq \sum_{m=0}^n \frac{a^{m}}{b^{m/2+n/2}}g(n,m)=\frac{1}{b^{n/2}}\sum_{m=0}^n \left(\frac{a}{\sqrt{b}}\right)^mg(n,m)$$ And so it suffices to prove that for all $a,b$ with $a^2-4b<0\Leftrightarrow \frac{|a|}{\sqrt{b}}<2$, the expression $\sum_{m=0}^n \left(\frac{a}{\sqrt{b}}\right)^mg(n,m)$ will be $\leq 0$ for some $n>0$.
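A short Python sketch checks the closed form against the bounds computed by hand; note the binomial taken as $\binom{(m+n)/2}{m}$ is the one that reproduces them (the values $a=-1.9$, $b=1$ are simply the example used further down):

```python
from math import comb

def g(n, m):
    if m > n or (n - m) % 2 != 0:
        return 0
    return (-1) ** ((m + n) // 2) * comb((m + n) // 2, m)

def bound(n, a, b):
    """Closed-form lower bound for c_n."""
    return sum(a**m / b ** ((m + n) / 2) * g(n, m) for m in range(n + 1))

print(bound(2, -1.9, 1))  # ~ a^2/b^2 - 1/b = 2.61
print(bound(9, -1.9, 1))  # ~ -0.108903421, the first nonpositive bound
```

Checking `bound(2, a, b)` symbolically gives $a^2/b^2 - 1/b$ and `bound(3, a, b)` gives $2a/b^2 - a^3/b^3$, matching the hand computation above.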

To prove this, we use inspiration from this paper. Write $$H_n(x)=\sum_{m=0}^n x^mg(n,m)$$ Then using the substitution $x=2\cosh(z)$ we find that $$H_n(x)=\pm\frac{\sinh((n+1)z)}{\sinh(z)}$$ So $$H_n(x)=0\Leftrightarrow z=\frac{i\pi k}{n+1},\ k\in\{1,\dots,n\}\Leftrightarrow x=2\cos\left(\frac{\pi k}{n+1}\right)\in(-2,2)$$ From here it is easy to finish the proof: the roots $2\cos\left(\frac{\pi k}{n+1}\right)$ become dense in $(-2,2)$ as $n$ grows, and $H_n$ changes sign at each of them, so for $n$ big enough we can find $k$ such that $\frac{a}{\sqrt{b}}$ lies between two consecutive roots on an interval where $H_n$ is non-positive.
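The root description is easy to verify numerically; the sketch below checks (for $n=7$, an arbitrary choice) that $H_n$ vanishes at $2\cos(\pi k/(n+1))$ and that $|H_n(2\cos\theta)|=|\sin((n+1)\theta)/\sin\theta|$:

```python
from math import comb, cos, sin, pi

def g(n, m):
    if m > n or (n - m) % 2 != 0:
        return 0
    return (-1) ** ((m + n) // 2) * comb((m + n) // 2, m)

def H(n, x):
    return sum(x**m * g(n, m) for m in range(n + 1))

n = 7
# zeros at x = 2 cos(pi k / (n+1)), k = 1..n
for k in range(1, n + 1):
    assert abs(H(n, 2 * cos(pi * k / (n + 1)))) < 1e-9

# the hyperbolic identity, up to sign, at a generic angle
theta = 0.3
assert abs(abs(H(n, 2 * cos(theta))) - abs(sin((n + 1) * theta) / sin(theta))) < 1e-9
print("checks passed")
```

Up to sign, $H_n$ is the Chebyshev-like polynomial $S_n$ with $S_n(2\cos\theta)=\sin((n+1)\theta)/\sin\theta$; for instance $H_7(x)=-x^7+6x^5-10x^3+4x=-S_7(x)$.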

Note that the condition of irreducibility of $x^2+ax+b$ is nicely used, since it ensures that $\frac{a}{\sqrt{b}}$ lies in the range $(-2,2)$, where $H_n(x)$ switches sign.

To see what this looks like in the case mentioned by Arthur in the comments: we first create a small table of values of $g(n,m)$ (rows $n=0,\dots,9$ from top to bottom, columns $m=9,\dots,0$ from left to right):

$$ \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1\\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 2 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & -3 & 0 & 1\\ 0 & 0 & 0 & 0 & -1 & 0 & 4 & 0 & -3 & 0\\ 0 & 0 & 0 & 1 & 0 & -5 & 0 & 6 & 0 & -1\\ 0 & 0 & -1 & 0 & 6 & 0 & -10 & 0 & 4 & 0\\ 0 & 1 & 0 & -7 & 0 & 15 & 0 & -10 & 0 & 1\\ -1 & 0 & 8 & 0 & -21 & 0 & 20 & 0 & -5 & 0 \end{pmatrix} $$

So, for $x^2-1.9x+1$ (that is, $a=-1.9$, $b=1$), the conditions read

$$c_0 \geq 1$$ $$c_1 \geq \frac{1.9}{1}=1.9$$ $$c_2 \geq 1.9^2-1=2.61$$ $$c_3 \geq 1.9^3-2\cdot 1.9=3.059$$ $$c_4\geq 3.2021$$ $$c_5\geq 3.02499$$ $$c_6\geq 2.545381$$ $$c_7\geq 1.8112339$$ $$c_8\geq 0.89596341$$ $$c_9\geq -0.108903421$$ This is smaller than $0$, so we can set $c_9=c_{10}=\dots=0$, and find that $$T(x)=1+1.9x+2.61x^2+3.059x^3+3.2021x^4+3.02499x^5+2.545381x^6+1.8112339x^7+0.89596341x^8$$ will work, and indeed $$(x^2-1.9x+1)\,T(x)=1 + 4.44\times 10^{-16} x^3 + 4.44\times 10^{-16} x^6 + 4.44\times 10^{-16} x^7 + 0.108903 x^9 + 0.895963 x^{10}$$ where the $10^{-16}$ terms are floating-point noise. I think it's safe to say that, without rounding errors, we would find $$(x^2-1.9x+1)\,T(x)=1 + 0.108903 x^9 + 0.895963 x^{10}$$
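The "safe to say" claim can in fact be confirmed with exact rational arithmetic, since all the bounds above happen to be exact decimals; a short Python sketch:

```python
from fractions import Fraction as F

# exact decimal coefficients of T(x), constant term first
T = [F(s) for s in
     "1 1.9 2.61 3.059 3.2021 3.02499 2.545381 1.8112339 0.89596341".split()]
p = [F(1), F("-1.9"), F(1)]  # x^2 - 1.9x + 1, constant term first

# convolution: coefficient of x^k in p(x) * T(x)
prod = [sum(p[i] * T[k - i] for i in range(3) if 0 <= k - i < len(T))
        for k in range(len(T) + 2)]
print([float(x) for x in prod])
# -> [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.108903421, 0.89596341]
```

So the product is exactly $1 + 0.108903421\,x^9 + 0.89596341\,x^{10}$, with every intermediate coefficient exactly zero.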

So it's not surprising that Arthur conjectured that this is a counterexample, since it requires a rather arbitrary-looking degree $8$ polynomial.


For the other test case, $F(x)=x^4-x+1$: we factor over $\mathbb{R}$, and find one quadratic factor that already has positive coefficients, and the quadratic $$G(x)=0.713639-1.45427 x+x^2$$ Using the algorithm, we find $$c_0=1,\ c_1\geq 2.03782,\ c_2\geq 2.75145,\ c_3\geq 2.75144,\ c_4\geq 1.75142,\ c_5\geq -0.286423$$ so we are done, and $$(0.713639-1.45427 x+x^2)(1+2.03782x+2.75145x^2+2.75144x^3+1.75142x^4)\approx 0.713639+0.204402x^5+1.75142x^6$$
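As a sanity check on the (rounded) factorization: writing $x^4-x+1=(x^2-ax+c)(x^2+ax+d)$, the $x^3$ coefficient vanishes automatically and the constant term forces $cd=1$, i.e. $d=1/c$. A quick numerical sketch confirms the quoted values of $G$:

```python
a, c = 1.45427, 0.713639  # rounded values from G(x) = x^2 - a x + c above
d = 1 / c                 # forced by the constant term c*d = 1

# (x^2 - a x + c)(x^2 + a x + d) = x^4 + (c + d - a^2) x^2 + a(c - d) x + c d
coeff_x2 = c + d - a**2   # should be close to 0
coeff_x1 = a * (c - d)    # should be close to -1
coeff_x0 = c * d          # should be close to 1
print(coeff_x2, coeff_x1, coeff_x0)
```

The residuals are of order $10^{-6}$, consistent with six-digit rounding of the true roots.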