Hi, I hope everyone is doing fine. Lately I have been studying probability, aiming to improve at it, and I recently came across this hard problem.

Let $(u_{n})$ be a sequence of independent, identically distributed random variables with the Rademacher distribution, and let $f(x)=\sum_{n \geq 0} u_{n} x^{n}$.

Prove that: $$ f(x)~~\text{almost surely has infinitely many zeros in}~~ [0,1).$$

I have been stuck on it for weeks now, but by searching around I found a hint on a forum. Here it is: construct an increasing sequence $(x_{k})$ such that the events $A_{k}=\{f(x_{0}),\dots,f(x_{k}) \text{ are nonzero and have the same sign} \}$ satisfy $p(A_{k+1})\leq \frac{6}{7}\, p(A_{k})$.

I find it difficult to construct such a sequence by induction in a way that lets us solve the problem. Any suggestion is welcome.
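To build intuition, here is a quick numerical experiment I tried (not a proof, and every parameter — the truncation degree `N`, the grid, the seed — is an arbitrary choice of mine): sample one path of the series, truncate it, and record its sign along a grid approaching $1$. For the $x$ values used here the discarded tail is geometrically small, so the truncation does not affect the sign in practice.

```python
# A quick numerical experiment (not a proof): sample one path of the
# Rademacher series, truncate it at degree N (an arbitrary cutoff; the
# tail beyond N is geometrically small for the x used here), and record
# the sign of the truncated sum along a grid approaching 1.
import random

def truncated_f(u, x):
    # Horner evaluation of sum_{k=0}^{N} u_k x^k
    val = 0.0
    for c in reversed(u):
        val = val * x + c
    return val

rng = random.Random(0)
N = 2000
u = [rng.choice((-1, 1)) for _ in range(N + 1)]
grid = [1 - 1 / m for m in range(2, 40)]   # x -> 1
signs = [1 if truncated_f(u, x) > 0 else -1 for x in grid]
sign_changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print(signs, sign_changes)
```

Each sign change along the grid forces a zero of the (truncated) function in between, by the intermediate value theorem.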


Let $t_n = 1-\frac{1}{3^{3^{2n}}}$ and $K_n = 3^{3^{2n-1}}$ for $n \ge 1$. We claim that, for all $n$ large enough (maybe $n \ge 10$ is fine), with probability at least $1-\frac{1}{n^2}$, the sign of $\sum_{k \ge 1} u_k t_n^k$ is the same as the sign of the block sum $\sum_{K_n \le k < K_{n+1}} u_k t_n^k$.

First, the tail is tiny:
$$\Big|\sum_{k \ge K_{n+1}} u_k t_n^k\Big| \le \sum_{k \ge K_{n+1}} t_n^k \le \frac{t_n^{K_{n+1}}}{1-t_n} = \Big(1-\frac{1}{3^{3^{2n}}}\Big)^{3^{3^{2n+1}}} 3^{3^{2n}} \approx e^{-3^{3^{2n+1}}/3^{3^{2n}}}\,3^{3^{2n}} = e^{-3^{2\cdot 3^{2n}}}\,3^{3^{2n}} \le e^{-3^{3^{2n}}}$$
for large $n$. Second, the head is trivially bounded: $|\sum_{k < K_n} u_k t_n^k| \le K_n = 3^{3^{2n-1}}$. Therefore, if $|\sum_{K_n \le k < K_{n+1}} u_k t_n^k| \ge 2\cdot 3^{3^{2n-1}}$ (say), then $\sum_{k \ge 1} u_k t_n^k$ has the same sign as the block sum.

And with probability at least $1-\frac{1}{n^2}$, we will indeed have $|\sum_{K_n \le k < K_{n+1}} u_k t_n^k| \ge 2\cdot 3^{3^{2n-1}}$. The variance is
$$\mathbb{E}\left[\Big|\sum_{K_n \le k < K_{n+1}} u_k t_n^k\Big|^2\right] = \sum_{K_n \le k < K_{n+1}} t_n^{2k} \ge c\, 3^{3^{2n}}$$
for an absolute constant $c>0$ (about $3^{3^{2n}}$ of the terms are of constant size), so the standard deviation is of order $3^{3^{2n}/2} \gg 3^{3^{2n-1}}$, and since the summands are independent and symmetric, standard anti-concentration (Paley–Zygmund, or the central limit theorem) gives the claim.

So by Borel–Cantelli, with probability $1$, for all but finitely many $n$ the block sum $\sum_{K_n \le k < K_{n+1}} u_k t_n^k$ has the same sign as $\sum_{k \ge 1} u_k t_n^k$. The signs of the block sums are independent as $n$ ranges, and each has probability $\frac12$ of being positive, so with probability $1$ we get infinitely many sign changes along the sequence $(t_n)$, and thus infinitely many roots by the intermediate value theorem.
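The doubly exponential parameters above are impossible to simulate directly, but the separation of scales they create can be seen numerically with much smaller, hypothetical stand-ins $t = 1 - 1/256$, $K_1 = 4$, $K_2 = 256^2$ (my own choices, playing the roles of $t_n$, $K_n$, $K_{n+1}$):

```python
# Toy-scale check of the three estimates above.  The parameters t, K1, K2
# are scaled-down stand-ins for t_n, K_n, K_{n+1}; the real ones are far
# too large to compute with.
import math

t = 1 - 1 / 256
K1, K2 = 4, 256 ** 2

head_bound = K1                                 # |sum_{k<K1} u_k t^k| <= K1
tail_bound = t ** K2 / (1 - t)                  # geometric bound on the tail
block_std = math.sqrt(sum(t ** (2 * k) for k in range(K1, K2)))  # sqrt(E|block|^2)

print(head_bound, tail_bound, block_std)
```

With these toy numbers the tail bound is about $10^{-109}$ while the block standard deviation exceeds $2 K_1$, which is exactly the configuration the argument needs.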


I have a solution that does not use the hint you mentioned.

You can write $f(x)=\sum_{n \geq 0} u_{n} x^{n} = f_n(x) + R_n(x)$, where $f_n(x) = \sum_{k=0}^n u_k x^k$ and $R_n(x) = \sum_{k\geq n+1} u_k x^k$. Intuitively, what happens is quite clear: the graph of $f$ should oscillate up and down, somewhat like a random walk, as $x$ approaches $1$. The problem is to show it rigorously.

With this intuition in mind, a nice idea is to define an increasing sequence $(x_n)$ such that $\lim_{n\rightarrow \infty} x_n = 1$ and $f(x_n)$ is close to $f_n(1) = \sum_{k=0}^n u_k$. I will write $s_n = \sum_{k=0}^n u_k$ for the random walk.

Let us compare: $f(x) - f_n(1) = R_n(x) + (f_n(x) - f_n(1))$.

Let us see what happens with crude bounds.

It is easy to show that $|R_n(x)| \leq \frac{x^{n+1}}{1-x}$.

On the other hand, $|f_n(x) - f_n(1)| \leq \sum_{k=0}^n (1 - x^k) = (1-x)\sum_{k=1}^n \sum_{s=0}^{k-1} x^s \leq \frac{n(n+1)}{2} (1-x)$.
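Both crude bounds are just the triangle inequality, so they can be sanity-checked mechanically on a random draw of signs (the values of $n$, $x$ and the truncation level $N$ below are arbitrary choices of mine):

```python
# Sanity check of the two crude bounds on one random draw of signs.
import random

rng = random.Random(1)
n, x, N = 50, 0.9, 5000          # N: truncation level for the tail R_n
u = [rng.choice((-1, 1)) for _ in range(N + 1)]

tail = sum(u[k] * x ** k for k in range(n + 1, N + 1))           # ~ R_n(x)
head_diff = abs(sum(u[k] * (x ** k - 1) for k in range(n + 1)))  # |f_n(x)-f_n(1)|

tail_ok = abs(tail) <= x ** (n + 1) / (1 - x)
head_ok = head_diff <= n * (n + 1) / 2 * (1 - x)
print(tail_ok, head_ok)
```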

We would like to build $x_n$ so that the two quantities $\frac{n(n+1)}{2} (1-x_n)$ and $\frac{x_n^{n+1}}{1-x_n}$ are both small. Let us try $x_n = 1 - \frac{1}{n}$. The first quantity is equivalent to $\frac{n}{2}$ and the second to $\frac{n}{e}$, so $|f(x_n) - s_n| \leq (\frac{1}{2} + \frac{1}{e} + \epsilon)\, n$ for $n$ big enough. Unfortunately, a bound linear in $n$ is too big, but we are quite close: a bound of order $\sqrt{n}$ would suffice, since $\frac{s_n}{\sqrt{n}}$ almost surely has $\limsup$ equal to $+\infty$ and $\liminf$ equal to $-\infty$ (by the law of the iterated logarithm, $\frac{s_n}{\sqrt{2n\log\log n}}$ has $\limsup$ equal to $1$ and $\liminf$ equal to $-1$).
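The two equivalents can be checked numerically at $x_n = 1 - 1/n$ for a large $n$:

```python
# Check the asymptotics at x_n = 1 - 1/n: the first bound grows like n/2
# and the second like n/e.
import math

n = 10 ** 6
x = 1 - 1 / n
first = n * (n + 1) / 2 * (1 - x)    # ~ n/2
second = x ** (n + 1) / (1 - x)      # ~ n/e
print(first / n, second / n)
```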

It is possible to get sharper bounds by using Abel summation. I will go quite fast; I encourage you to do the calculations yourself to check. The philosophy is the following: $|s_n|$ can trivially be bounded by $n$, but for $\epsilon>0$ the law of the iterated logarithm bounds it almost surely by $C(u,\epsilon) + (1+\epsilon)\sqrt{2n\log\log n}$, where $C(u, \epsilon)>0$ is a constant depending only on $u$ and $\epsilon$ (not on $x$ or on $n$); below I will drop the iterated-logarithm factor and simply write $\sqrt{n}$, to keep the formulas readable. This gains a factor of $\sqrt{n}$ in both bounds.

We have $R_n(x)= -\sum_{k\geq n+1} \big(\sum_{s=n+1}^k u_s\big) (x^{k+1} - x^k)$. Let $\epsilon>0$. Then you can check that almost surely $|R_n(x)| \leq C(u, \epsilon)(1-x) + (1+\epsilon)\sum_{k \geq n+1} \sqrt{k-n}\, (x^k - x^{k+1})$, where $C(u, \epsilon)>0$ is a constant depending only on $u$ and $\epsilon$ (not on $x$ or on $n$). As $x\rightarrow 1$, this bound on $|R_n(x)|$ behaves like $\frac{x^n}{\sqrt{1-x}}$ (up to a multiplicative constant).
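The Abel-summation identity behind this bound, with $S_k = u_{n+1}+\dots+u_k$, can be verified numerically on a truncated tail (all parameters below are arbitrary choices of mine; the boundary term $S_N x^N$ vanishes as $N \to \infty$ for $0 \le x < 1$):

```python
# Numerical check of the Abel-summation identity for a truncated tail:
#   sum_{k=n+1}^{N} u_k x^k
#     == sum_{k=n+1}^{N-1} S_k (x^k - x^{k+1}) + S_N x^N,
# where S_k = u_{n+1} + ... + u_k.
import random

rng = random.Random(2)
n, N, x = 5, 400, 0.95
u = [rng.choice((-1, 1)) for _ in range(N + 1)]

direct = sum(u[k] * x ** k for k in range(n + 1, N + 1))

S, abel = 0, 0.0
for k in range(n + 1, N):
    S += u[k]                          # S is now S_k
    abel += S * (x ** k - x ** (k + 1))
boundary = (S + u[N]) * x ** N         # S_N x^N

print(abs(direct - (abel + boundary)))
```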

For $|f_n(x) - f_n(1)|$, using the same trick, I think you can get a bound of order $n\sqrt{n}\,(1-x)$ (again up to a multiplicative constant).

So you take $x_n = 1-\frac{1}{n}$ as before: you gain a factor of $\sqrt{n}$ in both terms, giving a bound of order $\sqrt{n}$, which should be just enough to conclude! Quite magical, isn't it?