Fastest way to calculate $e^x$ up to arbitrary number of decimals?

What methods of calculating $e^x$ to any number of decimals are faster than using the Taylor series formula?


If I wanted to implement an arbitrary precision approximation to $e^x$ from "scratch" (building on an arbitrary precision numeric library such as GMP) with faster convergence than Taylor series/polynomial approximations, I would design around continued fraction/rational function approximations.

Just as the precision given by the well-known Taylor series can be extended as required by using additional terms, so too will the continued fraction expansion give any required precision by using additional convergents.

At first it may seem that evaluating the continued fraction approximation entails a significant disadvantage in comparison with the power series: additional terms can be added to the partial sums of the power series to obtain left-to-right sequential approximations. But the same effect can be achieved with continued fractions by accumulating the numerators and denominators of successive convergents through the fundamental recurrence relations, as in the sketch below.
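To make that concrete, here is a minimal Python sketch (the function and the generator for the $\ln 2$ expansion, which appears later in this answer, are my own naming):

```python
from fractions import Fraction
from itertools import islice

def convergents(terms):
    """Yield the convergents of a1/(b1 + a2/(b2 + a3/(b3 + ...))),
    where `terms` yields (a_n, b_n) pairs, via the fundamental
    recurrences h_n = b_n*h_{n-1} + a_n*h_{n-2} (and likewise k_n)."""
    h_prev, h = 1, 0   # h_{-1}, h_0
    k_prev, k = 0, 1   # k_{-1}, k_0
    for a, b in terms:
        h_prev, h = h, b * h + a * h_prev
        k_prev, k = k, b * k + a * k_prev
        yield Fraction(h, k)

# Demo: the continued fraction for ln 2 used later in this answer,
# with partial numerators 1, 1, 1, 4, 4, 9, 9, 16, 16, ... and
# partial denominators 1, 2, 3, 4, ...
def ln2_terms():
    n = 1
    while True:
        yield (1 if n == 1 else (n // 2) ** 2, n)
        n += 1

for c in islice(convergents(ln2_terms()), 7):
    print(float(c))
```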

There are a couple of important ideas worth considering even if another method for evaluating the exponential function were to be used on a bounded interval such as $[0,1]$, as for example power series or even lookup tables(!). One is symmetry of the exponential function, and the other is a method of range reduction.

Although the exponential function is neither even nor odd, it satisfies the relation:

$$e^{-x} = 1/e^x$$

that implies it is a simple rational transformation of an even function:

$$e^x = \frac{1 + x\,f(x^2)}{1 - x\,f(x^2)}$$

which may be solved for a power series or continued fraction approximation of $f(z)$. This is the idea of symmetry as it applies to the exponential function. Not only does it reduce the evaluation of the exponential function on the whole real line to just the positive (or the negative) half, it also "contracts" the number of terms needed for a given accuracy by half (retaining only $x^2$ terms in the expansion).
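To see where the even function comes from, solve the relation above for $x\,f(x^2)$:

$$x\,f(x^2) = \frac{e^x - 1}{e^x + 1} = \tanh(x/2)$$

Since $\tanh(x/2)$ is odd, $\tanh(x/2)/x$ is even, i.e. genuinely a function of $x^2$ alone. One classical choice for $f$ is then Gauss's continued fraction for the hyperbolic tangent:

$$f(x^2) = \cfrac{1}{2 + \cfrac{x^2}{6 + \cfrac{x^2}{10 + \cfrac{x^2}{14 + ...}}}}$$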

Although a continued fraction or Taylor series approximation to the exponential function may converge for arbitrary real (or complex) values, the rate of convergence is not uniform: the Taylor series converges faster the closer the argument is to zero. Thus a range reduction is important for expressing the exponential function at an arbitrary real value in terms of its value in some interval like $[0,1]$. For example, if the floating point arithmetic is binary (radix 2), it can be especially convenient to use the familiar law of exponents with multiples of $\ln 2$:

$$e^x = 2^k \cdot e^r \quad \text{where} \quad x = r + k \ln 2$$

which allows $e^x$ to be evaluated, up to a change in binary exponent, using an approximation that converges rapidly over $[0,\ln 2]$.
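A minimal sketch of that reduction (double precision only, with `math.exp` standing in for whatever small-interval approximation is ultimately used):

```python
import math

LN2 = math.log(2)

def exp_reduced(x):
    """Evaluate e^x as 2^k * e^r with r in [0, ln 2)."""
    k = math.floor(x / LN2)    # integer multiple of ln 2 to remove
    r = x - k * LN2            # remainder, 0 <= r < ln 2
    # e^r would come from the rapidly converging small-interval
    # approximation; math.exp is only a stand-in here.
    return math.ldexp(math.exp(r), k)   # e^r * 2^k

print(exp_reduced(10.0))   # ≈ 22026.465794806718
print(math.exp(10.0))
```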

Combining these two ideas (symmetry, range reduction), the interval on which rapid convergence is needed can be shrunk to $[0, \ln 2/2]$. Limiting the interval of evaluation may allow you to determine in advance how many terms of the continued fraction or Taylor series expansion must be retained to obtain a desired accuracy. When this is known, the evaluation can be done more efficiently (backward recursion for continued fractions, or Horner's method for truncated Taylor series/polynomials) than if we were forced to keep introducing further terms until the desired accuracy is reached. A sketch combining these pieces follows.
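Here is my own arrangement of those pieces as a sketch (Python's `decimal` module supplies the arbitrary precision arithmetic; the continued fraction is the Gauss/tanh one above, rearranged as $e^x = 1 + 2x/(2 - x + x^2/(6 + x^2/(10 + \cdots)))$; the fixed term count `depth` is chosen generously rather than from a rigorous error bound):

```python
from decimal import Decimal, getcontext

def exp_small(x, depth=12):
    """e^x for small |x| via backward recursion with a fixed number
    of terms of e^x = 1 + 2x/(2 - x + x^2/(6 + x^2/(10 + ...)));
    the partial denominators are 4m - 2 for m = 1, 2, ..., depth."""
    x = Decimal(x)
    x2 = x * x
    t = Decimal(4 * depth - 2)          # innermost denominator
    for m in range(depth - 1, 1, -1):   # work outward: m = depth-1, ..., 2
        t = Decimal(4 * m - 2) + x2 / t
    return 1 + 2 * x / (2 - x + x2 / t)

getcontext().prec = 50
print(exp_small(Decimal("0.25")))
# agrees with e^0.25 = 1.2840254166877... to many digits;
# depth 12 is ample for arguments of this size.
```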

Added:

The "faster" continued fraction I had in mind is formula (11.1.2) here:

[Handbook of Continued Fractions for Special Functions (Cuyt et al., 2008)](http://books.google.com/books?id=DQtpJaEs4NIC&dq=exponential+function+continued+fraction+convergence)

Their reference is to this modern classic:

A. N. Khovanskii, *The Application of Continued Fractions and Their Generalizations to Problems in Approximation Theory*, P. Noordhoff, 1963.

A neat webMathematica-based site by Andreas Lauschke presents some ideas for accelerating the convergence of continued fractions by "contractions". Its IP address changes from time to time and so cannot be used as a stable link within StackExchange, but you can Google for it:

[Convergence Acceleration with Canonical Contractions of Continued Fractions: Exponential function -- Andreas Lauschke Consulting]

I have some notes on his formulas (derived by contraction from the one noted above) if that would be helpful. Some related material was contributed to Wolfram Demonstrations.

Computing the constant $\ln 2$

Generations of math students have been introduced to the concept of conditional versus absolute convergence by the example of the alternating harmonic series:

$$\ln 2 = 1 - 1/2 + 1/3 - 1/4 + ...$$

Of course this series, derived from a power series expansion of $\ln x$ around $x = 1$, has such slow convergence that even if we combine consecutive pairs of terms:

$$\ln 2 = 1/2 + 1/12 + 1/30 + ...$$

the resulting absolutely convergent series is still useless for obtaining arbitrary precision values of $\ln 2$. For convenience, the first seven partial sums are:

0.50000000...  
0.58333333...  
0.61666666...  
0.63452381...  
0.64563492...  
0.65321067...  
0.65870518...  

Since $\ln 2 = 0.69314718...$, the error is about a third of a unit in the first decimal place. In other words, after seven terms we have little more than one correct decimal place.

This makes a striking contrast with the nice convergence of a continued fraction expansion of the same value:

$$\ln 2 = \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{3 + \cfrac{4}{4 + \cfrac{4}{5 + \cfrac{9}{6 + \cfrac{9}{7 + ...}}}}}}}$$

The first seven convergents are:

1.00000000...  
0.66666666...  
0.70000000...  
0.69230769...  
0.69333333...  
0.69312169...  
0.69315245...  

The error here is about half a unit in the fifth decimal place.
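For anyone who wants to reproduce both tables, here is a throwaway comparison script of my own, using exact rational arithmetic:

```python
from fractions import Fraction

# Partial sums of ln 2 = 1/2 + 1/12 + 1/30 + ..., whose j-th term
# pairs two terms of the alternating series: 1/((2j-1)(2j)).
s, sums = Fraction(0), []
for j in range(1, 8):
    s += Fraction(1, (2 * j - 1) * (2 * j))
    sums.append(s)

# Convergents of ln 2 = 1/(1 + 1/(2 + 1/(3 + 4/(4 + 4/(5 + ...))))),
# partial numerators 1, 1, 1, 4, 4, 9, 9, ... and denominators 1, 2, 3, ...
h_prev, h, k_prev, k = 1, 0, 0, 1
convs = []
for n in range(1, 8):
    a = 1 if n == 1 else (n // 2) ** 2
    h_prev, h = h, n * h + a * h_prev
    k_prev, k = k, n * k + a * k_prev
    convs.append(Fraction(h, k))

for ps, cv in zip(sums, convs):
    print(f"{float(ps):.8f}   {float(cv):.8f}")
```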


The theoretically fastest way appears to be to use Newton iteration to reduce the problem to computing the logarithm, and then to compute the logarithm with an algorithm based on the arithmetic-geometric mean. See the Wikipedia page on the [computational complexity of mathematical operations](http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations).
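A sketch of the Newton half of that idea (mine, in double precision; a real implementation would swap `math.log` for the AGM-based arbitrary precision logarithm and roughly double the working precision at each step):

```python
import math

def exp_via_newton(x, iterations=6):
    """Solve ln(y) = x for y by Newton's method:
        y_{k+1} = y_k * (1 + x - ln(y_k)),
    which converges quadratically once y is near e^x."""
    y = 1.0 + x   # crude seed; fine for the modest x left after range reduction
    for _ in range(iterations):
        y = y * (1.0 + x - math.log(y))
    return y

print(exp_via_newton(1.0))   # ≈ 2.718281828459045
print(math.exp(1.0))
```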

In practice the Taylor series should work fine, given a good implementation. The page http://numbers.computation.free.fr/Constants/constants.html has an example implementation that uses the Taylor series to compute $e$; the authors claim it took 0.02 seconds to compute $e$ to a thousand decimals on a 450 MHz Pentium II.
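As a rough illustration of why the plain Taylor series is serviceable (my own naive sketch, not the implementation from that page; it sums $\sum 1/n!$ in scaled integer arithmetic with a few guard digits):

```python
def e_digits(digits):
    """Floor of e * 10**digits, from e = sum of 1/n! in scaled integers."""
    guard = 10                       # guard digits against accumulated floor error
    scale = 10 ** (digits + guard)
    total, term, n = 0, scale, 0     # invariant: term == floor(scale / n!)
    while term:                      # terms hit 0 once n! > 10**(digits+guard)
        total += term
        n += 1
        term //= n
    return total // 10 ** guard

e = e_digits(50)
print(f"{e // 10**50}.{e % 10**50:050d}")
# 2.71828182845904523536028747135266249775724709369995
```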