For every real number $x_1$ construct the sequence $x_1,x_2,x_3,\ldots$ by setting $x_{n+1}=x_n(x_n+\frac{1}{n})$ for each $n \ge 1$. Prove that there exists exactly one value of $x_1$ for which $0 < x_n < x_{n+1} < 1$ for every $n$.


Assume that there is some $x_1$ such that $0<x_n<x_{n+1}<1$ for every $n$. Then $\{x_n\}$ is monotonically increasing and bounded, hence convergent by the monotone convergence theorem.

Let $\lim\limits_{n\rightarrow \infty}{x_n}=l$. Passing to the limit in the recurrence gives $l^2=l$, so $l\in\{0,1\}$; since the sequence is increasing we have $l\ge x_1>0$, which rules out $0$, and hence $x_n\rightarrow 1$.
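Spelled out, the limit step reads
$$l=\lim_{n\to\infty}x_{n+1}=\lim_{n\to\infty}x_n\left(x_n+\frac{1}{n}\right)=l\,(l+0)=l^2 .$$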


Proof of uniqueness of $x_1$.

Suppose that two first terms $x_1$ and $a_1$ both yield sequences $\{x_n\}$ and $\{a_n\}$ with the required property, so that both converge to $1$. Without loss of generality assume $x_1>a_1$ (recall that the condition forces all terms of both sequences to be positive). Using induction we can prove that $x_n>a_n$ for all $n\in \mathbb{N}$.
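Indeed, the map $t\mapsto t\left(t+\frac{1}{n}\right)$ is strictly increasing for $t>0$, so $x_n>a_n>0$ implies
$$x_{n+1}=x_n\left(x_n+\frac{1}{n}\right)>a_n\left(a_n+\frac{1}{n}\right)=a_{n+1}.$$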

As $x_n$ and $a_n$ both converge to $1$, there exists an $N$ such that $\frac{3}{4}<a_n<x_n<1$ for all $n\ge N$ (take $\varepsilon=\frac{1}{4}$).

Observe that for $n\ge N$ (using $x_n+a_n>\frac{3}{2}$) $$x_{n+1}-a_{n+1}=\left(x_n-a_n\right)\left(x_n+a_n\right)+\frac{1}{n}\left(x_n-a_n\right)>\frac{3}{2}(x_n-a_n)$$

Hence $$\frac{x_{n+1}-a_{n+1}}{x_n-a_n}>\frac{3}{2}\quad\text{for all }n\ge N\qquad\Longrightarrow\qquad x_{n+1}-a_{n+1}>\left(\frac{3}{2}\right)^{n+1-N}\cdot(x_N-a_N)$$

Since $x_N-a_N>0$, the right-hand side tends to infinity, whereas $x_{n+1}-a_{n+1}<1$ for all $n$: a contradiction. Hence $x_1$ is unique.


To prove existence, define $f_{1}(x)=x$ and $f_{n+1}(x)=f_{n}(x)\left(f_{n}(x)+\frac{1}{n}\right)$, so that $x_n=f_n(x_1)$. Using induction one proves that $f_{n}(x)$ is a strictly increasing function of $x$ on $[0,1]$.
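For instance, $$f_1(x)=x,\qquad f_2(x)=x(x+1)=x^2+x,\qquad f_3(x)=\left(x^2+x\right)\left(x^2+x+\tfrac{1}{2}\right)=x^4+2x^3+\tfrac{3}{2}x^2+\tfrac{1}{2}x,$$ each a polynomial with non-negative coefficients and zero constant term.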

Let $b_n$, $c_n$ be such that $f_{n}(b_n)=1-\frac{1}{n}$ and $f_{n}(c_n)=1$. The existence and uniqueness of $b_n$, $c_n$ follow from the monotonicity of $f_n$, the observations $f_{n}(0)=0$ and $f_{n}(1)\ge 1$, and the continuity of $f_{n}$. Also observe that $b_n<c_n$, and prove by induction that $(b_n)$ is a strictly increasing and $(c_n)$ a strictly decreasing sequence.
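For example, for $n=2$ we have $f_2(x)=x(x+1)$, and solving $f_2(b_2)=\tfrac{1}{2}$ and $f_2(c_2)=1$ gives $$b_2=\frac{\sqrt{3}-1}{2}\approx 0.366,\qquad c_2=\frac{\sqrt{5}-1}{2}\approx 0.618,$$ consistent with $b_2<c_2$ (and with the numerical value of $x_1$ found in the answer below).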

The set $A=\{b_n\}$ is bounded above. Hence, by the least upper bound property of $\mathbb{R}$, there exists $\gamma =\sup \{b_n\}$. We shall show that $\gamma$ is indeed our $x_1$!

Since $b_n<b_{n+1}\le\gamma\le c_{n+1}<c_n$ for every $n$ (every $b_m$ is smaller than every $c_n$) and $f_n$ is strictly increasing, $$\color{blue}{f_{n+1}{(\gamma)}}=f_{n}(\gamma)\left(f_n{(\gamma)}+\frac{1}{n}\right)\color{blue}{>}f_{n}(\gamma)\left(f_{n}(b_n)+\frac{1}{n}\right)=\color{blue}{f_{n}(\gamma)>0}$$ $$\color{blue}{1}=f_{n}(c_{n})\color{blue}{>f_{n}(\gamma)} \quad \forall n $$ $$\implies 1>f_{n+1}{(\gamma)}>f_{n}(\gamma)>0 \qquad \square$$


Proofs of interest

$f_n(x)$ is a polynomial with non-negative coefficients, zero constant term and leading coefficient $1$; hence $f_n(x)$ is strictly increasing on $[0,1]$.

Notice that $\color{blue}{f_{n+1}(b_n)}=f_{n}(b_n)\left(f_{n}(b_n)+\frac{1}{n}\right)=1-\frac{1}{n}\color{blue}{<}1-\frac{1}{n+1}=\color{blue}{f_{n+1}(b_{n+1})} \implies b_n< b_{n+1}$ or $b_n$ is a strictly increasing sequence.

Notice that $\color{blue}{f_{n+1}(c_n)}=f_{n}(c_n)\left(f_{n}(c_n)+\frac{1}{n}\right)=1+\frac{1}{n}\color{blue}{>}1=\color{blue}{f_{n+1}(c_{n+1})} \implies c_{n+1}<c_n$ or $c_n$ is a strictly decreasing sequence.

The fact that $(b_n)$ is bounded above is clear, since $b_n<c_n\le c_1=1$ for all $n$.
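As a purely numerical illustration of this construction (not part of the argument above), one can locate $b_n$ and $c_n$ by bisection, using that $f_n$ is increasing on $[0,1]$, and watch the bracketing interval $[b_n,c_n]$ close in on the unique $x_1$. Here is a minimal Pascal sketch; the identifiers f and bracket are mine, introduced only for this illustration:

program bracket_x1;
{ Numerical illustration only: bracket the unique x_1 by the
  intervals [b_n,c_n] from the existence proof. }

{ f(n,x) evaluates f_n(x): f_1(x)=x, f_{k+1}(x)=f_k(x)*(f_k(x)+1/k). }
function f(n : integer; x : double) : double;
var k : integer;
begin
  for k := 1 to n-1 do x := x*(x+1/k);
  f := x;
end;

{ Solve f_n(t)=target for t in [0,1] by bisection (f_n is increasing there). }
function bracket(n : integer; target : double) : double;
var lo,hi,mid : double; i : integer;
begin
  lo := 0; hi := 1;
  for i := 1 to 60 do
  begin
    mid := (lo+hi)/2;
    if f(n,mid) < target then lo := mid else hi := mid;
  end;
  bracket := (lo+hi)/2;
end;

var n : integer;
begin
  { Keep n modest: f_n grows extremely fast away from x_1. }
  for n := 2 to 12 do
    Writeln('n = ',n:2,'   b_n = ',bracket(n,1-1/n):0:12,'   c_n = ',bracket(n,1):0:12);
end.

The printed intervals $[b_n,c_n]$ shrink geometrically and close in on the value $x_1\approx 0.4465\ldots$ computed in the answer below.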


This is not a separate answer, but rather a (too long) comment in response to the proper answer by boywholived. Sure, there exists exactly one such value of $x_1$, but the comment by barak manos still stands: any chance you can tell us what that value of $x_1$ is? Here is a little computer program that does the job:

program IMO_1985;
procedure find;
const
  LARGE : integer = 50;
var
  n,i : integer;
  x,x_1,bit : double;
  H,L : boolean;
begin
  { Initialize }
  x_1 := 1/2; bit := 1/2;
  for i := 0 to LARGE do
  begin
    n := 1; H := true; L := true;
    { Writeln(n:2,' : ',x_1); }
    x := x_1;
    while true do
    begin
      x := x*(x+1/n);
      { Writeln((n+1):2,' : ',x); }
      { Decreasing towards zero: }
      L := (x < 1-1/n);
      { Exploding towards infinity: }
      H := (x > 1);
      if H or L then Break;
      n := n+1;
    end;
    { Readln; }
    { Find x_1 bit by bit: }
    bit := bit/2;
    if H then x_1 := x_1 - bit;
    if L then x_1 := x_1 + bit;
  end;
  Writeln(x_1:0:16);
end;
begin find; end.
The program relies on a few easy-to-prove statements about the sequence:

  • If the sequence $x_n$ is decreasing at some index $n$ (i.e. $x_{n+1} < x_n$), then it stays decreasing from then on and converges to a limit, which is zero.
  • The sequence is decreasing at index $n$ if and only if $x_n < 1-1/n$. Due to the previous statement, we can put a Break in the calculations at that point, and conclude that our current estimate of the initial value $x_1$ is too small, so it must be increased (by one bit).
  • If some value of $x_n$ becomes greater than $1$, then the sequence will be increasing forever (i.e. it explodes). We can put a Break in the calculations at that point, and conclude that our current estimate of the initial value $x_1$ is too large, so it must be decreased (by one bit).
It turns out that the convergence of the sequence towards the desired limit $\;x_n \to 1\;$ is extremely unstable numerically. So what we actually have is a switch between $\,0\,$ and $\,\infty\,$ that generates the outcome for $x_1$ bit by bit, until double precision is reached.
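Quantitatively (anticipating the Error Analysis below): a perturbation $dx_1$ of the starting value is amplified to $$dx_{n+1}=(2x_1+1)\left(2x_2+\tfrac{1}{2}\right)\cdots\left(2x_n+\tfrac{1}{n}\right)dx_1 ,$$ and along the critical orbit each factor is close to $2$, so every extra correct bit of $x_1$ buys only about one more iteration before the sequence veers off towards $0$ or $\infty$.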
Now I almost forgot to tell you what the outcome is: $$ x_1 \approx 0.4465349172642295 $$

Reverse method. If a method is (numerically) unstable, then try the reverse: it is not uncommon for the reverse to be stable. So let's solve for $x_n \ge 0$ from $x_{n+1} = x_n(x_n+1/n)$: $$ x_n^2 + \frac{1}{n} x_n - x_{n+1} = 0 \quad \Longrightarrow \quad x_n = -\frac{1}{2n}+\sqrt{\left(\frac{1}{2n}\right)^2+x_{n+1}} = \frac{x_{n+1}}{\frac{1}{2n}+\sqrt{\left(\frac{1}{2n}\right)^2+x_{n+1}}} $$ The last rewriting is done for reasons of numerical stability as well. Now start with some LARGE value of $n$, put $x_{n+1}=1-1/(n+1)$, and iterate backwards until $x_1$ is reached. The resulting (Pascal) program is almost a one-liner (i.e. without the Error Analysis):

program IMO_1985;
procedure find;
const
  LARGE : integer = 50;
var
  n : integer;
  x,E : Extended;
begin
  { Initialize }
  x := 1-1/(LARGE+1);
  E := 1/(LARGE+1);
  for n := LARGE downto 1 do
  begin
    x := x/(sqrt(1/sqr(2*n)+x)+1/(2*n));
    { Error Analysis: }
    E := E/(2*x+1/n);
  end;
  Writeln('Outcome x_1 = ',x:0:16);
  Writeln(' # decimals = ',-Trunc(ln(E)/ln(10)));
  Writeln('# good bits = ',-Trunc(ln(E)/ln(2)));
end;
begin find; end.
Needless to say, the outcome is the same as with the previous method.

Error Analysis. Estimate with infinitesimals: $$ x_{n+1} = x_n(x_n+1/n) \quad \Longrightarrow \quad dx_{n+1} = (2x_n+1/n)\,dx_n $$ Initialize with $\;dx_{n+1} = 1/(n+1)$ . It follows that: $$ dx_1 = \frac{dx_{n+1}}{(2x_1+1)(2x_2+1/2)(2x_3+1/3)\cdots(2x_n+1/n)} $$

The Reverse method program has been slightly modified to take this into account; the variable E corresponds with the errors $dx_n$ . Minus the 10-logarithm of the error $dx_1$ in $x_1$ is the number of significant decimals, $16$ in our case, which indeed seems to be alright. Minus the 2-logarithm of the error $dx_1$ in $x_1$ is the number of significant bits: $53$ . This number, not at all by coincidence (why?), is close to the number LARGE $=50$ in both programs.

Update. Actually, our value $\,x_1\,$ as well as the error $\,dx_1\,$ are functions of $\,n$ , so let's replace $\,x_1\,$ by $\,X(n)\,$ for all finite values of $\,n\,$ and likewise $\,dx_1\,$ by $\,\epsilon(n)$ : $$ \epsilon(n) = \frac{1/(n+1)}{(2x_1+1)(2x_2+1/2)(2x_3+1/3)\cdots(2x_n+1/n)} \quad \Longrightarrow \quad \epsilon(n) = \epsilon(n-1)\frac{n/(n+1)}{2x_n+1/n} $$ With $\,1-1/n < x_n < 1\,$ we have $\,2-1/n < 2x_n+1/n < 2+1/n\,$ and: $$ \epsilon(n) < \epsilon(n-1)\frac{1}{(1+1/n)(2-1/n)} = \frac{\epsilon(n-1)}{2+1/n-1/n^2} < \frac{\epsilon(n-1)}{2} \quad \Longrightarrow \quad \epsilon(n) < \epsilon(1)\left(\frac{1}{2}\right)^{n-1} $$ For $\,n=1\,$ we have $\,1 < 2x_1+1 < 3\,$ hence $\,\epsilon(1) < 1/2$ . This proves a
Lemma (see above: so that's why!). $$ \epsilon(n) < \left(\frac{1}{2}\right)^n $$ Note. I'm not quite sure of this, because the seed error $\,dx_{n+1}=1/(n+1)\,$ (which is $1/2$ for $n=1$) is not an infinitesimal (i.e. sufficiently small) error; perhaps $\,\epsilon(n)\,$ must be multiplied by a proper constant?
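For what it's worth, taking the Lemma at face value with LARGE $=50$: the backward recursion starts from an error $1/51$ in $x_{51}$, and the Lemma then bounds the resulting error in $x_1$ by $\epsilon(50) < 2^{-50} \approx 9\cdot 10^{-16}$, i.e. about $50$ good bits or $15$ decimals, in reasonable agreement with the $53$ bits / $16$ decimals reported by the program.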
Theorem. As expected: $$ \lim_{n\to\infty} X(n) = x_1 $$ Proof. For all real $\epsilon > 0$ there exists an integer $N > 0$ such that if $n > N$ then $\left|X(n) - x_1\right| < \epsilon$ .
Indeed, define $N$ as the smallest integer with $N \ge \ln(1/\epsilon)/\ln(2)$ ; then $\,(1/2)^N \le \epsilon\,$ and for $n > N$ we have, with the above Lemma, $\,\left|X(n)-x_1\right| < (1/2)^n < \epsilon$ . This completes the proof.
Disclaimer: apart from a few technicalities perhaps; see the above Note.