Let's define the sequence $\{s_n\}$ recursively as $$s_1=\sqrt2,\ \ \ s_{n+1}=\sqrt2^{\,s_n}.$$ Or, in other words, $$s_n=\underbrace{\sqrt2^{\sqrt2^{\ .^{\ .^{\ .^{\sqrt2}}}}}}_{n\ \text{levels}}.$$ The sequence is monotonically increasing and converges rapidly to the limit $$\lim\limits_{n\to\infty}s_n=2.$$ I'm interested in estimating its convergence speed.

Based on numerical data, I conjectured that $$\ln\left(2-s_n\right)=n\ln\ln2+c_{\sqrt2}+O\big(\left(\ln2\right)^n\big)$$ for some constant $c_{\sqrt2}\approx-0.458709787761420587059021...$

Could you suggest possible approaches to prove (or refute) this conjecture?
I am also interested in a possible closed form of the constant $c_{\sqrt2}$.
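For reference, here is the kind of numerical experiment that motivates the conjecture (a quick Python sketch, not a proof): the quantity $\ln(2-s_n)-n\ln\ln2$ should approach a constant geometrically.

```python
import math

# Iterate s_{n+1} = sqrt(2)^{s_n} and track d_n = ln(2 - s_n) - n*ln(ln 2).
# If the conjecture holds, d_n -> c_sqrt2 with an O((ln 2)^n) error.
s = math.sqrt(2.0)                  # s_1
lnln2 = math.log(math.log(2.0))
d_values = []
for n in range(1, 41):
    d_values.append(math.log(2.0 - s) - n * lnln2)
    s = math.sqrt(2.0) ** s         # s_{n+1}

print(d_values[-1])                 # ≈ -0.45871, the conjectured constant
```

Already at $n=40$ the value agrees with $c_{\sqrt2}$ to about seven digits, and the successive differences shrink by a factor close to $\ln 2$.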


Update: We can try to generalize this problem to other bases beyond $\sqrt2$. Let's use the usual notation for tetration $${^n}a=\underbrace{a^{a^{\ .^{\ .^{\ .^a}}}}}_{n\ \text{levels}}.$$ It's known that for all $1/e^e<a<e^{1/e}$ the limit $${^\infty}a=\lim\limits_{n\to\infty}{^n}a=e^{-W\left(-\ln a\right)}$$ exists, where $W(z)$ is the Lambert $W$ function, the inverse of the function $x\mapsto x\,e^x$.
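The limit formula is easy to check numerically. A self-contained sketch (the `lambert_w` Newton solver below is just an illustrative helper, not a library function; it is adequate for the small negative arguments used here):

```python
import math

def lambert_w(z, w=0.0, iters=50):
    """Principal branch of Lambert W via Newton's method on w*e^w = z."""
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

def tower_limit(a):
    """^infty a = e^{-W(-ln a)} for 1 < a < e^{1/e}."""
    return math.exp(-lambert_w(-math.log(a)))

# For a = sqrt(2) the infinite tower is exactly 2:
print(tower_limit(math.sqrt(2.0)))   # ≈ 2.0

# Cross-check against direct fixed-point iteration x -> a^x:
a = 1.3
x = 1.0
for _ in range(200):
    x = a ** x
print(tower_limit(a), x)             # both ≈ 1.4710
```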

I conjecture that for all $1<a<e^{1/e}$ $$\ln\left({^\infty}a-{^n}a\right)=n \ln\ln\left({^\infty}a\right)+c_a+O\left(e^{n\ln\ln\left({^\infty}a\right)}\right),$$ where $c_a$ is some constant that depends on $a$ but not on $n$ (also note that $\ln\ln\left({^\infty}a\right)<0$, so the last term is exponentially small).
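The same numerical probe as in the $\sqrt2$ case applies to other bases. A sketch, using plain fixed-point iteration to approximate ${^\infty}a$ and an arbitrary test base $a=1.4$ (note that $f'(x_\infty)=x_\infty\ln a=\ln{^\infty}a$, which is why $\ln\ln({^\infty}a)$ appears as the rate):

```python
import math

a = 1.4                          # any base in (1, e^{1/e}) works the same way
x = 1.0
for _ in range(500):             # iterate x -> a^x; converges to ^infty a
    x = a ** x
x_inf = x

q = math.log(math.log(x_inf))    # ln ln(^infty a) < 0
t = a                            # ^1 a
d_list = []
for n in range(1, 31):
    d_list.append(math.log(x_inf - t) - n * q)
    t = a ** t                   # ^{n+1} a

print(d_list[-1])                # stabilizes at the conjectured constant c_a
```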


Solution 1:

Here's a proof of a weaker result. Let $r_n = 2 - s_n$. $$ r_{n+1} = 2 - s_{n+1}= 2 - \sqrt{2}^{s_n}= 2\left(1 - \sqrt{2}^{\,s_n-2}\right)= 2\left(1 - \sqrt{2}^{\,-r_n}\right) $$ Now, let $\alpha = \ln 2$. $$ \sqrt{2}^{\,-r_n}= \left(e^{\ln \sqrt{2}}\right)^{-r_n}= e^{-\frac{1}{2}\alpha r_n}= 1-\frac{1}{2}\alpha r_n + O\left(r_n^2\right) $$ and so $$ r_{n+1}=\alpha\,r_n + \phi\left(r_n\right),\quad \phi\left(s\right) = O\left(s^2\right). $$ Hence, $$ \ln r_{n+1}= \ln\left(\alpha\,r_n + O\left(r_n^2\right)\right)= \ln\alpha + \ln r_n + \ln\left(1+O\left(r_n\right)\right)= \ln\alpha + \ln r_n + \psi\left(r_n\right),\quad \psi(s)=O\left(s\right), $$ and, telescoping, $$ \ln r_n = \ln r_1 + \left(n-1\right) \ln \alpha + \sum_{k=1}^{n-1}\psi\left(r_k\right). $$ This last sum is convergent (proof below), so the rate of convergence is determined by the behaviour of $\psi\left(r_n\right)$. Since $\psi\left(s\right)=O(s)$, it is in fact determined by the behaviour of $r_n$.

Lemma For any $\varepsilon > 0$, we have $\left|r_n\right|=O\left(\left(\alpha + \varepsilon\right)^n \right)$

Proof. Let $\varepsilon > 0$. Since $r_n\to 0$ and $\phi\left(s\right)=O\left(s^2\right)$, for sufficiently large $n$ we have $\left|\phi\left(r_n\right)\right|<\varepsilon\, \left|r_n\right|$, and so $$ \left|r_{n+1}\right|= \left|\alpha\,r_n+\phi\left(r_n\right)\right| < \left(\alpha + \varepsilon\right)\left|r_n\right|, $$ and it is clear we can bound $\left|r_n\right|$ by a geometric sequence with ratio $\alpha + \varepsilon$.

Using this, we can easily bound the tail of the sum by $O\left(\left(\alpha + \varepsilon\right)^n\right)$, and so conclude that $$ \ln\left(2 - s_n\right) = n\ln\ln 2 + c + O\left(\left(\ln 2 + \varepsilon\right)^n\right) $$ for every $\varepsilon > 0$, where $c=\ln\left(2 - \sqrt{2}\right) - \ln\ln 2 + S$ and $S$ is the sum of the series discussed above.
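For what it's worth, this decomposition also lets one evaluate the constant numerically (a sketch, not part of the proof): since $\psi(r_k)=\ln r_{k+1}-\ln r_k-\ln\alpha$ exactly, we can sum the series directly.

```python
import math

alpha = math.log(2.0)
r = 2.0 - math.sqrt(2.0)         # r_1
S = 0.0                          # partial sum of psi(r_k)
for _ in range(40):
    r_next = 2.0 * (1.0 - math.sqrt(2.0) ** (-r))        # r_{k+1}
    S += math.log(r_next) - math.log(r) - math.log(alpha)  # psi(r_k)
    r = r_next

c = math.log(2.0 - math.sqrt(2.0)) - math.log(alpha) + S
print(c)   # ≈ -0.45871, matching the conjectured c_sqrt2
```

Forty terms already reproduce the constant from the question to about seven digits.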

Of course, that's strictly weaker than the conjecture from the question. Still, I believe it's close enough to be at least relevant.

Solution 2:

This is the fixed-point iteration method for solving an equation of the form $x=f(x)=(\sqrt{2})^x$ by iterating $x_{n+1}=f(x_n)$; it is very commonly discussed in numerical analysis textbooks.

In general, if the solution is $x_\infty = f(x_\infty)$, the error of the $n$-th term behaves as $$ \epsilon_{n+1} = x_{n+1}-x_\infty = f(x_n) - f(x_\infty) \approx f'(x_\infty)(x_n - x_\infty) = f'(x_\infty)\epsilon_n, $$ $$ \epsilon_n = (f'(x_\infty))^n \times O(1), $$ which is where the $n\log\log 2 = n\log f'(2)$ comes from in your data: here $f'(x)=\ln\sqrt2\cdot(\sqrt2)^x$, so $f'(2)=2\ln\sqrt2=\ln 2$.
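As a quick sanity check of this linear convergence rate (a sketch, not part of the argument), the successive error ratios should approach $f'(2)=\ln 2\approx0.6931$:

```python
import math

x = math.sqrt(2.0)               # s_1
eps_prev = x - 2.0               # epsilon_1 (negative; sign cancels in the ratio)
ratios = []
for _ in range(30):
    x = math.sqrt(2.0) ** x      # s_{n+1} = f(s_n)
    ratios.append((x - 2.0) / eps_prev)
    eps_prev = x - 2.0

print(ratios[-1])                # ≈ 0.6931, i.e. f'(2) = ln 2
```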

The constant term in $\log\epsilon_n - n \log f'(x_\infty)$ is determined not just by the method or the equation, but also by the initial condition $s_1=\sqrt2$, which is why it is very often not analysed. The initial guess at the solution can be anything at all, really, as long as it's close enough to the solution for the method to converge. Thus if the method converges to the same value from most starting values, why bother singling out a specific arbitrarily-chosen one?

In this case the constant can be extracted from a renormalized iteration: take $x_0=1$ (so that $x_1=f(x_0)=\sqrt2=s_1$) and set $y_n=\dfrac{x_\infty-x_n}{(f'(x_\infty))^n}$. Since $x_\infty-f(x_\infty-r)=x_\infty\big(1-f(-r)\big)$ for $f(x)=(\sqrt2)^x$, this satisfies $$ y_0 = 1, \qquad y_{n+1} = \frac{x_\infty}{(f'(x_\infty))^{n+1}}\Big(1-f\big(-(f'(x_\infty))^n\,y_n\big)\Big), $$ and $c_{\sqrt2}=\ln\lim y_n$. This limit is easy enough to compute numerically, but there might not be a way to do it analytically.
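Numerically this is immediate. A sketch under the substitution $y_n=(x_\infty-x_n)/(f'(x_\infty))^n$ with $x_0=1$ (so $y_0=1$); here $x_\infty=2$ and $f'(2)=\ln 2$:

```python
import math

# Renormalized error iteration: y_{n+1} = (x_inf / lam^{n+1}) * (1 - f(-lam^n * y_n))
# with f(x) = sqrt(2)^x, x_inf = 2, lam = f'(2) = ln 2, y_0 = 1.
lam = math.log(2.0)
half = 0.5 * lam                  # ln sqrt(2), so f(-u) = exp(-half * u)
y = 1.0                           # y_0 = (2 - x_0) / lam^0 = 1
for n in range(60):
    u = lam ** n * y
    y = (2.0 / lam ** (n + 1)) * (-math.expm1(-half * u))  # 1 - f(-u), computed stably

print(math.log(y))                # ≈ -0.45871, i.e. ln(lim y_n) = c_sqrt2
```

Using `expm1` avoids the catastrophic cancellation in $1-f(-u)$ once $u$ becomes tiny.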