Is there an efficient method for the calculation of $e^{1/e}$?

(I wondered whether this is appropriate for Math StackExchange or better suited to Stack Overflow, since it deals with computing; but I'm asking about mathematical details, not programming details, so I suspect here is better. If it should be put somewhere else, feel free to move it.)

I was wondering about the computation of $e^{1/e}$ -- what I call the "tetrational constant", or "$\eta$", since it is important in tetration theory -- to a very large number of digits, similar to how $\pi$ and $e$ have been computed. In the case of $e$, it is possible to compute it fairly efficiently via

$$e = \sum_{n=0}^{\infty} \frac{1}{n!}$$.

Naively summing this series with simple multiplication methods yields a time complexity of $O(n^2)$, where $n$ is the number of digits to compute (perhaps a little worse if we consider very large $n$ that don't fit in a machine word, though there is also a mitigating factor in that $\frac{1}{n!}$ converges to $0$ somewhat faster than exponentially -- I haven't worked this all out). We can go much faster, however, if we apply what is called "binary splitting", which changes the cost from $O(n^2)$ to $O(\log(n)\ M(n))$, where $M(n)$ is the cost of multiplying two $n$-digit numbers. Using classical grade-school multiplication with $M(n) = O(n^2)$ gives no advantage; in fact, it is worse, with $O(n^2 \log(n))$ complexity instead of just $O(n^2)$. But if we leverage the faster FFT technique with $M(n) = O(n \log(n))$ (an idealized bound -- in practice the FFT is carried out in limited precision, so we must eventually pack fewer digits per FFT element and ultimately increase the precision of the FFT arithmetic itself), then we get $O(n \log(n)^2)$, which is much better than the naive approach.
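To make the binary-splitting idea concrete, here is a minimal Python sketch (my own illustration; the function names and the crude choice of term count are assumptions, not from any reference implementation). It accumulates $\sum_{k=1}^{N} 1/k!$ as one exact fraction $P/Q$ by merging half-ranges, so full-precision work is confined to a few big multiplications and one final division:

```python
# Binary splitting for e = 1 + sum_{k>=1} 1/k!  (a sketch, not tuned for speed).
from math import lgamma, log

def bsplit(a, b):
    """Return (P, Q) with sum_{k=a+1}^{b} 1/((a+1)(a+2)...k) = P/Q,
    where Q = (a+1)(a+2)...b; both are exact integers."""
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p_left, q_left = bsplit(a, m)
    p_right, q_right = bsplit(m, b)
    # merge: S(a,b) = S(a,m) + S(m,b)/Q(a,m); each level costs O(M(n)) overall
    return p_left * q_right + p_right, q_left * q_right

def e_digits(n):
    """floor(e * 10^n): pick N with N! > 10^(n+10), then sum N terms exactly."""
    N = 2
    while lgamma(N + 1) < (n + 10) * log(10):   # lgamma(N+1) = log(N!)
        N += 1
    p, q = bsplit(0, N)                          # sum_{k=1}^{N} 1/k! = p/q
    return ((q + p) * 10 ** n) // q              # e = 1 + p/q

print(e_digits(50))  # -> 271828182845904523536028747135266249775724709369995
```

Only the top-level merges and the final division touch full precision, which is where the $O(\log(n)\ M(n))$ bound comes from.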

"Binary splitting" can be in turn generalized to arbitrary hypergeometric series. But consider now $e^{1/e}$. Plugging $\frac{1}{e}$ into $\exp$'s series yields a naive algorithm in which we must do full-precision work just to compute terms of the series -- after computing $\frac{1}{e}$! This algorithm has bound $O(n\ M(n))$ (or perhaps a little better, due to the same considerations from the convergence of reciprocal factorial as for $e$), which is pretty bad -- with classical multiplication it's (somewhat better than, but worse than $O(n^2)$) $O(n^3)$ and even using the FFT technique, it's still $O(n^2 \log(n))$ -- worse than computing $e$ via the purely-classical approach! Trying to use the binary splitting doesn't save us jack, since full precision is required for every term. We can try and get another series by taking $f(x) = e^{e^x - 1} - 1$ at $x = -1$ to compute $\frac{e^{1/e}}{e} - 1$ as

$$\frac{e^{1/e}}{e} - 1 = \sum_{n=1}^{\infty} (-1)^n \frac{B_n}{n!}$$

where $B_n$ are the Bell numbers. This has only rational terms. But this series is not hypergeometric, so binary splitting does not apply. Worse yet, it converges even more slowly than the first approach, and since consecutive terms have no simple ratio, it seems we don't save anything by this method (see the sketch below).

So, is there an efficient formula or algorithm to compute $e^{1/e}$? Note that $\pi$ can also be computed by binary splitting on, say, the Chudnovsky formula, and by AGM iteration (Gauss-Legendre and other methods), both of which yield algorithms (when FFT multiplication is employed) with complexities of the form $O(n \log(n)^k)$ for some positive integer $k$. So that complexity -- $O(n \log(n)^k)$ -- is what I take as an "efficient" algorithm for the purposes of this question.
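To see the slow convergence concretely, here is a small Python check (my own sketch; the Bell triangle is a standard way to generate $B_n$ exactly, even though it gives no hypergeometric-style term ratio):

```python
# Partial sums of sum_{n>=1} (-1)^n B_n / n!  versus the target e^(1/e)/e - 1.
from fractions import Fraction
from math import exp, factorial

def bell_numbers(N):
    """B_0 .. B_N via the Bell triangle recurrence."""
    bells, row = [1], [1]
    for _ in range(N):
        new_row = [row[-1]]          # each row starts with the last entry above
        for entry in row:
            new_row.append(new_row[-1] + entry)
        row = new_row
        bells.append(row[0])
    return bells

N = 30
B = bell_numbers(N)
partial = sum(Fraction((-1) ** k * B[k], factorial(k)) for k in range(1, N + 1))
print(float(partial))                        # -> about -0.4685364, slowly
print(exp(exp(-1) - 1) - 1)                  # target: e^(1/e)/e - 1
print(float(Fraction(B[N], factorial(N))))   # term size at n = 30: still ~3e-9
```

At $n = 30$ the term is still around $3 \times 10^{-9}$, whereas $\frac{(1/e)^n}{n!}$ is already below $10^{-45}$ there.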


I would use Newton's method on the function $f$ defined on $(1,\infty)$ by: $$f(x)=\log\left(\frac{1}{\log(x)}\right)-1$$ whose unique root is $x=e^{1/e}$, since $f(x)=0$ exactly when $\log(x)=1/e$. Its derivative is

$$f'(x)=\frac{-1}{x\log(x)}$$

Newton's method converges quadratically; here the update simplifies to $$x \;\leftarrow\; x - \frac{f(x)}{f'(x)} \;=\; x - x\log(x)\bigl(1+\log(\log(x))\bigr).$$
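For concreteness, a minimal sketch of this iteration in Python with mpmath (my own illustration, not part of the answer; the logarithm here is mpmath's library routine, so this only demonstrates the outer Newton loop, not the AGM logarithm discussed below):

```python
# A sketch of the Newton iteration, assuming mpmath for big-float arithmetic.
from mpmath import mp, mpf, log

mp.dps = 60   # working precision, in decimal digits

def eta_newton(digits=50):
    """Find the root of f(x) = log(1/log(x)) - 1, i.e. x = e^(1/e)."""
    x = mpf("1.5")                # starting guess near the root (Newton needs a decent start here)
    for _ in range(100):
        # simplified Newton step: x - f(x)/f'(x) = x - x*log(x)*(1 + log(log(x)))
        step = x * log(x) * (1 + log(log(x)))
        x -= step
        if abs(step) < mpf(10) ** (-digits):   # quadratic: only ~log2(digits) passes
            break
    return x

print(eta_newton())   # -> 1.4446678610097661...
```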

The computation of the logarithm could use the Arithmetic-Geometric Mean (AGM), which also converges quadratically. (See this article.)
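For illustration, a rough Python sketch of an AGM-based logarithm (my own choices of scaling and stopping test; it leans on the standard large-argument estimate $\log(s) \approx \frac{\pi}{2\,\mathrm{AGM}(1,\,4/s)}$ and borrows $\pi$ and $\log(2)$ from mpmath rather than computing them from scratch):

```python
# AGM-based logarithm sketch: scale x up by powers of 2, apply the
# large-argument formula, then subtract the scaling back out.
from mpmath import mp, mpf, pi, ln, sqrt

mp.dps = 50

def agm(a, b):
    """Arithmetic-Geometric Mean; the gap shrinks quadratically."""
    a, b = mpf(a), mpf(b)
    while abs(a - b) > mpf(10) ** (5 - mp.dps):
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def agm_log(x):
    """log(x) via log(s) ~ pi / (2*AGM(1, 4/s)) for s = x * 2^m large."""
    s, m = mpf(x), 0
    while s < mpf(2) ** (mp.prec // 2 + 10):  # need s > 2^(p/2) for p-bit accuracy
        s *= 2
        m += 1
    return pi / (2 * agm(1, 4 / s)) - m * ln(2)

print(agm_log("1.4446678610097661"))  # -> 0.367879441171442... (about 1/e)
```

In a full implementation, $\pi$ and $\log(2)$ would themselves be produced by AGM-type iterations (and cached); they are library values here only to keep the sketch short.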

It means that, asymptotically, to obtain $n$ digits you need $O(\log(n))$ Newton iterations, each requiring one logarithm evaluated by $O(\log(n))$ AGM steps at full precision; with FFT multiplication at $O(n \log(n))$ per product, the total cost is $O(n \log(n)^3)$, which meets the $O(n \log(n)^k)$ criterion of the question.
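One standard refinement, which I should flag as my addition rather than part of the answer above: Newton's method is self-correcting, so the early iterations can run at low precision, doubling the working precision at each step, and the final full-precision step then dominates:

$$\sum_{k \ge 0} C\, M\!\left(\frac{n}{2^k}\right) \log\left(\frac{n}{2^k}\right) = O\bigl(M(n)\log(n)\bigr) = O\bigl(n \log(n)^2\bigr),$$

bringing the log-based Newton approach down to $O(n \log(n)^2)$.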