Asymptotic behavior of iterative sequences
Solution 1:
For the moment I will only provide a solution for this problem:
Exercise. Given $x_0 > e^{-C}$ and $x_{n+1} = x_n + f(x_n)$, where $f(t) := \log t + C + o(1)$ as $t \to \infty$. Show that $x_n \sim n \log n$.
The following proof has not been proofread yet, so there might be some errors (hopefully no serious ones); I will proofread it when I have time.
Proof. First, $(x_n)$ is an increasing sequence and thus has a limit $\ell > e^{-C}$ (possibly infinite). If $\ell$ were finite we would get $\log \ell + C = 0$, i.e. $\ell = e^{-C}$, which contradicts $\ell > e^{-C}$.
So for the moment we only have $x_n \to + \infty$. To be more precise, one powerful method is the following: $x_{n+1}-x_n \simeq \log x_n$. The associated "continuous" problem is $y'(t) = \log y(t)$, which can be rewritten as $$\int_2^{y(t)} \frac{\mathrm{d}u}{\log u} = t- 2,$$ hence we get $\frac{\mathrm{d}}{\mathrm{d}t} F(y(t)) = 1$ with $$F(z)= \int_2^z \frac{\mathrm{d}u}{\log u}.$$ One can see (using integration by parts) that $$F(z) \sim \frac{z}{\log z}$$ as $z \to \infty$; we will use this shortly.
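To spell out the integration by parts (a standard step, included here only for completeness): since $\frac{\mathrm{d}}{\mathrm{d}u}\frac{u}{\log u} = \frac{1}{\log u} - \frac{1}{(\log u)^2}$, we have $$F(z) = \left[\frac{u}{\log u}\right]_2^z + \int_2^z \frac{\mathrm{d}u}{(\log u)^2},$$ and the last integral is $o\!\left(\frac{z}{\log z}\right)$ (split it at $\sqrt z$ for instance), which gives $F(z) \sim \frac{z}{\log z}$.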
Then it might be interesting to look at the discrete equivalent of $\frac{\mathrm{d}}{\mathrm{d}t} F(y(t)) = 1$, which is: $$\int_{x_n}^{x_{n+1}} \frac{\mathrm{d}u}{\log u} \to 1.$$ In fact we can obtain this just by integrating the comparison: $$\log x_n \leqslant \log u \leqslant \log x_{n+1} = \log x_n + o(\log x_n).$$
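Explicitly, integrating this comparison over $[x_n, x_{n+1}]$ gives $$\frac{x_{n+1}-x_n}{\log x_{n+1}} \leqslant \int_{x_n}^{x_{n+1}} \frac{\mathrm{d}u}{\log u} \leqslant \frac{x_{n+1}-x_n}{\log x_n},$$ and both bounds tend to $1$ because $x_{n+1}-x_n = \log x_n + C + o(1) \sim \log x_n$ and $\log x_{n+1} = \log x_n + o(1)$.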
Summing, we get $\displaystyle \int_2^{x_n} \frac{\mathrm{d}u}{\log u} \sim n$, i.e. $F(x_n) \sim n$, hence $\frac{x_n}{\log x_n} \sim n$; taking logarithms gives $\log x_n \sim \log n$, and finally $x_n \sim n \log n$.
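As a quick numerical sanity check (my own sketch, not part of the proof: it assumes the concrete choice $f(t) = \log t + C$ with the $o(1)$ term dropped, and the arbitrary test values $C = 1$, $x_0 = 1$):

```python
import math

# Iterate x_{n+1} = x_n + log(x_n) + C and watch the ratio x_n / (n log n).
# Assumptions: f(t) = log t + C exactly (o(1) term dropped), C = 1, x_0 = 1.
C, x = 1.0, 1.0
checkpoints = {10**3, 10**4, 10**5, 10**6}
for n in range(1, 10**6 + 1):
    x += math.log(x) + C
    if n in checkpoints:
        # The ratio tends to 1, but slowly: the correction is of order log(log n)/log n.
        print(n, x / (n * math.log(n)))
```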
Solution 2:
Although I don't have a "final" answer, this suggestion may help.
For polynomials or for analytic functions (having a power series representation with nonzero radius of convergence) I'd employ the concept of Carleman matrices.
Let $V(x) = [1, x, x^2, x^3, \ldots]$ be a row vector, let $F$ be the (transposed) Carleman matrix for your function $f(x)$, and let $I$ be the identity matrix. Then we could in principle write
$\qquad V(a_1) \cdot I = V(a_1)$
$\qquad V(a_1) \cdot F = V(f(a_1)) $
but in
$\qquad V(a_1) \cdot (I+F) = V(a_1) + V(f(a_1)) \ne V(a_2)$
the sum of two V()-vectors is not a V()-vector.
Instead, we first define the Carleman matrix $G$ for the function $g(x) = x + f(x)$.
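(Concretely, with the convention used here, where the row vector $V(x)$ acts from the left, the entry of the transposed Carleman matrix $G$ in row $i$ and column $j$ is the coefficient of $x^i$ in $g(x)^j$, so that $V(x) \cdot G = V(g(x))$ holds as an identity of formal power series.)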
Then we can iterate:
$\qquad V(a_0) = V(a_0) \cdot I\\
\qquad V(a_1) = V(a_0) \cdot G \\
\qquad V(a_2) = V(a_1) \cdot G = V(a_0) \cdot G^2 \\
\qquad \cdots \\
\qquad V(a_k) = V(a_0) \cdot G^k \\
$
as long as taking the $k$'th power $G^k$ makes sense (this requires only convergent series, or, as a generalization for certain divergent cases, for instance Euler-summable series).
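As a small numerical illustration of this setup (my own sketch, not part of the original answer: it uses a finite $N \times N$ truncation of the transposed Carleman matrix, built from truncated power series, for the example $g(x) = x + \ln(1+x)$ discussed below, and checks $V(a_0)\cdot G^k \approx V(g^{\circ k}(a_0))$ for a small $a_0$):

```python
import numpy as np

N = 12  # truncation order: all power series are taken mod x^N

def series_mul(a, b, n):
    """Cauchy product of two truncated power series, kept mod x^n."""
    out = np.zeros(n)
    for i in range(n):
        for j in range(n - i):
            out[i + j] += a[i] * b[j]
    return out

def carleman(g, n):
    """Transposed Carleman matrix: column j holds the coefficients of g(x)^j."""
    G = np.zeros((n, n))
    power = np.zeros(n)
    power[0] = 1.0                       # g(x)^0 = 1
    for j in range(n):
        G[:, j] = power
        power = series_mul(power, g, n)  # next column: g(x)^(j+1)
    return G

# g(x) = x + ln(1+x): coefficients [0, 2, -1/2, 1/3, -1/4, ...]
g = np.zeros(N)
g[1] = 2.0
for k in range(2, N):
    g[k] = (-1.0) ** (k + 1) / k

G = carleman(g, N)                       # upper triangular since g(0) = 0

a0, k = 0.05, 3
V0 = np.array([a0 ** i for i in range(N)])           # row vector V(a0)
via_matrix = (V0 @ np.linalg.matrix_power(G, k))[1]  # entry 1 ~ k-th iterate of a0

x = a0
for _ in range(k):
    x += np.log1p(x)                     # direct iteration for comparison
print(via_matrix, x)                     # should agree to many digits
```

The small value of $a_0$ keeps the truncation error of the $N$-term series negligible; for larger arguments the agreement degrades, consistent with the convergence caveats mentioned above.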
If $G$ is triangular, the formal power series for your iterated expression $a_k$ can be given exactly to any power (even for fractional powers of $G$, i.e. fractional iteration heights!), and with your initial value $a_0$ it might be convergent only up to some highest power of $G$. For instance, the transposed Carleman matrix for the function $f(x) = \ln(1+x)$ is triangular, and so is that for $g(x) = x + \ln(1+x)$; however, the range of convergence of the formal power series decreases with the iterations...
If $G$ is triangular and $g'(0) = 1$, we can express the power series for the $k$'th iterate with coefficients which are polynomials in $k$ and are thus especially easy to compute.
If $G$ is triangular and $g'(0) \notin \{0,1\}$, we can apply matrix diagonalization, which again gives exact coefficients for the power series, but these are likely more complicated.
(See two examples below)
For non-triangular Carleman matrices (where $g(0) \ne 0$) this is even more delicate and exceeds the scope of this answer...
Two examples. For $f(x) = \ln(1+x)$ the power series for the $h$-fold iterate $f^{\circ h}(x)$ begins with
$$\small \begin{aligned} f^{\circ h}(x) = {}& x \\ &- x^2 \cdot \tfrac{1}{2}h \\ &+ x^3 \cdot \left(\tfrac{1}{12}h + \tfrac{1}{4}h^2\right) \\ &- x^4 \cdot \left(\tfrac{1}{48}h + \tfrac{5}{48}h^2 + \tfrac{1}{8}h^3\right) \\ &+ x^5 \cdot \left(\tfrac{1}{180}h + \tfrac{1}{24}h^2 + \tfrac{13}{144}h^3 + \tfrac{1}{16}h^4\right) \\ &+ O(x^6) \end{aligned}$$
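As a sanity check, setting $h=1$ recovers $\ln(1+x) = x - \tfrac{x^2}{2} + \tfrac{x^3}{3} - \tfrac{x^4}{4} + \cdots$; for instance the $x^3$-coefficient becomes $\tfrac{1}{12}+\tfrac{1}{4}=\tfrac{1}{3}$ and the $x^4$-coefficient $-\left(\tfrac{1}{48}+\tfrac{5}{48}+\tfrac{6}{48}\right)=-\tfrac{1}{4}$.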
and for $g(x) = x + \ln(1+x)$ the power series for the $h$-fold iterate begins with
$$\small \begin{aligned} g^{\circ h}(x) = {}& x \cdot u \\ &+ x^2 \cdot \left(\tfrac{1}{4}u - \tfrac{1}{4}u^2\right) \\ &+ x^3 \cdot \left(\tfrac{1}{36}u - \tfrac{1}{8}u^2 + \tfrac{7}{72}u^3\right) \\ &+ x^4 \cdot \left(\tfrac{1}{672}u - \tfrac{17}{576}u^2 + \tfrac{7}{96}u^3 - \tfrac{181}{4032}u^4\right) \\ &+ x^5 \cdot \left(\tfrac{11}{75600}u - \tfrac{17}{4032}u^2 + \tfrac{91}{3456}u^3 - \tfrac{181}{4032}u^4 + \tfrac{13687}{604800}u^5\right) \\ &+ O(x^6) \end{aligned}$$
where $u$ has to be replaced by $2^h$, and $h$ is the iteration-height parameter.
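Again as a sanity check, for $h=1$ (so $u=2$) this reduces to $g(x) = x + \ln(1+x) = 2x - \tfrac{x^2}{2} + \tfrac{x^3}{3} - \cdots$; e.g. the $x^2$-coefficient becomes $\tfrac{1}{4}\cdot 2 - \tfrac{1}{4}\cdot 4 = -\tfrac{1}{2}$.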