$\lim_{n\to\infty} f(2^n)$ for some very slowly increasing function $f(n)$
Solution 1:
This is more of a (numerical) plausibility argument in support of Gottfried's conjecture that $\lim\limits_{n \to \infty} f(n) = \frac1{\log 2}$ than an answer.
What struck me was that the error ratios $Q(n)$ as $n$ is doubled seem to indicate a decrease in error by a factor of 2. This reminded me of the halving of the error exhibited by the trapezoidal rule for numerical integration as the number of subintervals is doubled, and led me to think that Richardson extrapolation might be appropriate for numerically approximating the limit, just as Richardson extrapolation is used to improve trapezoidal estimates in Romberg quadrature.
Recall that what Richardson extrapolation essentially does is to fit an interpolating polynomial $p(x)$ to successive values of a sequence $s_i$, along with some auxiliary sequence $x_i$ that tends to 0 as $i\to\infty$ such that $p(x_i)=s_i$, and then take $p(0)$ to be the estimate of the limit of the sequence $s_i$. For this particular case, the error ratio appearing to approach 2 suggests that we take the auxiliary sequence $x_j=2^{-j}$.
The Richardson scheme proceeds as follows: letting $T_j^{(0)}=s_j$, we generate a triangular array of values
$$\begin{matrix}T_0^{(0)}&&&&\\T_1^{(0)}&T_1^{(1)}&&&\\T_2^{(0)}&T_2^{(1)}&T_2^{(2)}&&\\\vdots&\vdots&\vdots&\ddots&\\T_n^{(0)}&T_n^{(1)}&\cdots&\cdots&T_n^{(n)}\end{matrix}$$
whose entries are generated according to the recursive formula
$$T_j^{(n)}=T_{j}^{(n-1)}+\frac{T_{j}^{(n-1)}-T_{j-1}^{(n-1)}}{\frac{x_{j-n}}{x_j}-1}$$
and the $T_n^{(n)}$ (i.e., the "diagonal" elements of the array) form a sequence which (hopefully) converges faster to the limit than the original $s_j$. For this specific example, we have $s_j=f(2^{j+1})$ and $\frac{x_{j-n}}{x_j}=2^n$.
The Richardson scheme can be implemented such that only a one-dimensional array is needed, and this is what I will do in the Mathematica code that follows (specializing to the case $\frac{x_{j-n}}{x_j}=2^n$):
richardson[seq_?VectorQ] := Module[{n = Length[seq], m, res, ta},
  ta[1] = First[seq]; res = {First[seq]};
  Do[
   ta[k + 1] = seq[[k + 1]];
   m = 1;
   Do[
    m *= 2;
    ta[j] = ta[j + 1] + (ta[j + 1] - ta[j])/(m - 1),
    {j, k, 1, -1}];
   res = {res, ta[1]},
   {k, n - 1}];
  Flatten[res]]
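As a quick sanity check of the scheme, here is the same one-dimensional update sketched in Python (purely for illustration; the function name `richardson` and the toy test sequence are my own). The test sequence $s_j = 1 + 2^{-j} + 3\cdot 4^{-j}$ is a made-up example whose error expands in powers of $2^{-j}$, exactly the situation this specialization of the scheme targets, so the extrapolated diagonal should recover the limit $1$ almost exactly:

```python
def richardson(seq):
    """Richardson extrapolation specialized to x_{j-n}/x_j = 2^n,
    using a single one-dimensional array.  Returns the sequence of
    diagonal entries T_k^{(k)} of the extrapolation tableau."""
    ta = [seq[0]]          # ta[j] holds the current column of the tableau
    diag = [seq[0]]
    for k in range(1, len(seq)):
        ta.append(seq[k])  # new zeroth-column entry T_k^{(0)}
        m = 1
        for j in range(k - 1, -1, -1):
            m *= 2         # m = 2^n at extrapolation level n
            ta[j] = ta[j + 1] + (ta[j + 1] - ta[j]) / (m - 1)
        diag.append(ta[0]) # ta[0] is now the diagonal entry T_k^{(k)}
    return diag

# toy sequence with limit 1 and error terms in 2^{-j} and 4^{-j}
s = [1 + 2**-j + 3 * 4**-j for j in range(8)]
print(richardson(s)[-1])   # recovers the limit 1 to machine precision
```

Since the toy sequence has only two error terms, two levels of extrapolation already annihilate them, and the remaining levels only shuffle rounding noise.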
We then generate the (first fourteen members of the) sequence in the OP to 100 significant figures:
pa[n_Integer, x_] := Expand[Sum[(-1)^k*BernoulliB[k]*Binomial[n, k]*
(x^(n - k + 1)/(n - k + 1)),
{k, n}] + x^(n + 1)/(n + 1) - (x + 1)^n]
vals = Table[x/2^j /. FindRoot[pa[2^j - 1, x] == 0, {x, 2^j - 1, 2^(j + 1) - 2},
WorkingPrecision -> 100], {j, 14}];
Last[vals] (* sample value, j == 14 *)
1.442637503359895400534264282420018712540776326004454785114823591142583865448308469439211266125019043
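For readers without Mathematica, the sequence can be reproduced with only the Python standard library; this is a sketch under my own choices, not the OP's code. It uses Faulhaber's formula $\sum_{i=1}^x i^n = \frac1{n+1}\sum_{k=0}^n \binom{n+1}{k} B_k\, x^{n+1-k}$ (with the $B_1 = +\frac12$ convention) in place of the expanded polynomial `pa`, and bisection in exact rational arithmetic in place of `FindRoot`; the names `bernoulli_plus`, `power_sum`, and `f` are mine:

```python
from fractions import Fraction
from math import comb, log

def bernoulli_plus(n):
    """Bernoulli numbers B_0..B_n via the standard recurrence
    sum_{j<m} C(m+1, j) B_j = -(m+1) B_m, then B_1 flipped to +1/2
    so that Faulhaber's formula sums i^n over i = 1..x."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    B[1] = Fraction(1, 2)
    return B

def power_sum(x, n, B):
    """Faulhaber's formula: sum_{i=1}^x i^n as a polynomial in x,
    evaluated exactly for rational x."""
    return sum(comb(n + 1, k) * B[k] * x ** (n + 1 - k)
               for k in range(n + 1)) / (n + 1)

def f(m, iters=60):
    """x/m, where x solves 1^n + 2^n + ... + x^n = (x+1)^n, n = m-1;
    the root is bracketed and bisected in exact rational arithmetic."""
    n = m - 1
    B = bernoulli_plus(n)
    h = lambda x: power_sum(x, n, B) - (x + 1) ** n
    lo, hi = Fraction(1), Fraction(4 * (n + 1))  # h(lo) < 0 < h(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return float((lo + hi) / 2 / m)

vals = [f(2 ** j) for j in range(1, 8)]  # f(2), f(4), ..., f(128)
print(vals[-1])  # creeps up toward 1/log(2) ≈ 1.4427, still a few 1e-3 away
```

(Double precision suffices for this illustration; the Mathematica version above works to 100 significant figures, which is what the extrapolation actually needs.) For $m=2$ the equation $1+2+\cdots+x = x+1$ gives $x=2$, so $f(2)=1$; for $m=4$ one can solve $\left(\frac{x(x+1)}2\right)^2=(x+1)^3$ by hand to get $x = 2+2\sqrt2$, a handy check on the code.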
If we apply richardson[] to vals, we get the following:
Last[richardson[vals]]
1.44269504088896340735992467998804478392151992973760702424401781700078935678750451268434910460102062
% - 1/Log[2]
-1.0138473535051260244153789098914315899303198623936805672011775182924857213751333727278493`*^-27
and the result of the extrapolation matches $\frac1{\log\;2}$ to ~ 27 digits.
Generating more members of the sequence and computing with more digits might give a more accurate value, but instead of doing this, I decided to apply a different extrapolation algorithm, the Shanks transformation, with the reasoning that if two unrelated algorithms give (almost) the same answer for a given sequence, the answer is likely correct.
I will skip the details of the mathematics behind the Shanks transformation (but see this article) and instead present Mathematica code:
wynnEpsilon[seq_?VectorQ] := Module[{n = Length[seq], ep, res, v, w},
  res = {};
  Do[
   ep[k] = seq[[k]];
   w = 0;
   Do[
    v = w; w = ep[j];
    ep[j] = v + (If[Abs[ep[j + 1] - w] > 10^-(Precision[w]),
        ep[j + 1] - w, 10^-(Precision[w])])^-1,
    {j, k - 1, 1, -1}];
   res = {res, ep[If[OddQ[k], 1, 2]]},
   {k, n}];
  Flatten[res]]
This is essentially the algorithm behind the hidden built-in Mathematica function SequenceLimit[], but I have chosen to implement the algorithm explicitly for pedagogical reasons.
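The epsilon algorithm itself fits in a few lines of Python; this sketch mirrors the one-dimensional Mathematica implementation above but omits its division-by-zero safeguard, and the alternating-harmonic test series (whose partial sums converge slowly to $\log 2$) is just a convenient example, not the OP's sequence:

```python
from math import log

def wynn_epsilon(seq):
    """Wynn's epsilon algorithm (iterated Shanks transformation),
    maintained in a single one-dimensional array; the even columns
    of the epsilon table hold the accelerated estimates."""
    ep, res = [], []
    for k in range(len(seq)):
        ep.append(seq[k])
        w = 0.0
        for j in range(k - 1, -1, -1):
            v, w = w, ep[j]
            ep[j] = v + 1.0 / (ep[j + 1] - w)
        # pick the entry from the last even-index column
        res.append(ep[0] if k % 2 == 0 else ep[1])
    return res

# partial sums of 1 - 1/2 + 1/3 - ... , which converge to log 2
partial, s = [], 0.0
for i in range(1, 12):
    s += (-1) ** (i + 1) / i
    partial.append(s)
print(wynn_epsilon(partial)[-1] - log(2))  # far smaller than the ~0.04 raw error
```

Eleven raw terms of this series give only about one correct digit; the extrapolated value is accurate to many digits, which is the same acceleration being exploited on the OP's sequence.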
We then have the results
Last[wynnEpsilon[vals]]
1.442695040888963407376933999717472189155804331509810458775727421372220122967729979363328849794520
% - 1/Log[2]
1.70093187155800517291583773568245246402780144411109037865448994778022269010133063583652`*^-20
with a result that matches $\frac1{\log\;2}$ to ~ 20 digits. (The slight loss of accuracy is the price for the generality of the Shanks transformation, which did not exploit the behavior of the error ratios.)
The agreement between these two different extrapolation algorithms lends credibility to the conjecture that the limit is $\frac1{\log 2}$.
Solution 2:
Conceivably, the function $f(x)$ stays within $0.00005$ of $1.44$ all the way up to $x = 2^{2^{2^{10}}}$, and only then suddenly skyrockets. Localized data alone tells us nothing about the limit of a function without some other sort of knowledge, such as the form of the function or the behavior of its variation or derivative.