How do I develop numerical routines for the evaluation of my own special functions?

When performing computational work, I often come across a univariate function, defined in terms of an integral or differential equation, which I would like to rapidly evaluate (say, millions of times per second) over a specified interval to a given precision (say, one part in $10^{10}$). For example, the function $$ f(\alpha) = \int_{k=0}^\infty \frac{e^{-\alpha^2 k^2}}{k+1}\ \mathrm{d}k $$ over the interval $\alpha \in (0,10)$ came up in a recent project. Now it happens that this integral can be evaluated in terms of standard special functions (in particular, $\operatorname{Ei}(z)$ and $\operatorname{erfi}(z)$), but suppose we had a much more complicated function for which no such evaluation was known. Is there a systematic technique I can apply to develop my own numerical routines for the evaluation of such functions?

I am sure plenty of techniques must be out there, as fast algorithms seem to exist for basically all of the common special functions. However, I emphasize that the sort of technique I am looking for should not rely on the function having some particular structure (e.g. recurrence relations like $\Gamma(n+1) = n\Gamma(n)$ or reflection formulas like $\Gamma(z) \Gamma(1-z) = \pi \csc(\pi z)$). Ideally, such a technique would work for just about any (sufficiently well-behaved) function I come across.

You can take for granted that I do have some slow method of evaluating the desired function (e.g. direct numerical integration) to any precision, and that I'm willing to do a lot of pre-processing work with the slow method in order to develop a fast method.
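
For concreteness, here is the sort of slow reference evaluator I have in mind, sketched in Python with SciPy (`f_slow` is just an illustrative name; note that $f(\alpha)$ diverges logarithmically as $\alpha \to 0^+$, hence the open interval):

```python
import numpy as np
from scipy.integrate import quad

def f_slow(alpha, rel_tol=1e-13):
    """Slow reference evaluation of f(alpha) by adaptive quadrature.

    The integrand tends to 1/(k+1) as alpha -> 0, so the integral
    diverges logarithmically there; only alpha > 0 makes sense.
    """
    integrand = lambda k: np.exp(-(alpha * k) ** 2) / (k + 1.0)
    val, abserr = quad(integrand, 0.0, np.inf, epsrel=rel_tol, limit=200)
    return val

print(f_slow(1.0))  # reference value to validate any fast method against
```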


Solution 1:

Unfortunately, there is no single approach that will lead to robust, accurate, and high-performance implementations across the large universe of special functions. Often, two or more methods must be used for different parts of the input domain, and the necessary research and implementation work may take weeks for elementary functions and months for higher transcendental functions.

Since considerable mathematical as well as programming skill is required to produce high-quality implementations, my first recommendation is to leverage existing mathematical libraries as much as feasible. These could be commercial libraries, such as the NAG numerical libraries or the IMSL numerical libraries by RogueWave, or open-source libraries such as the GNU Scientific Library (GSL) or the math and numerics portions of the Boost library. You may also find relevant source code in online repositories such as Netlib's collected algorithms from ACM TOMS.

In practical terms, on modern SIMD-enhanced processors, the extensive use of tables is no longer advisable and (piecewise) polynomial approximations are usually the most attractive. The reason table-based methods have fallen into disfavor on high-performance processor architectures is that over the past decade the performance of functional units (measured in FLOPS) has grown much faster than the performance of memory sub-systems (measured in GB/sec). The reasoning in the following paper matches my own professional experience:

Marat Dukhan and Richard Vuduc, "Methods for high-throughput computation of elementary functions". In Parallel Processing and Applied Mathematics, pp. 86-95. Springer, 2014.

In terms of performance, polynomial approximations benefit from the fused multiply-add (FMA) operation present in modern processor hardware (both CPUs and GPUs). FMA also helps reduce round-off error while offering some protection against subtractive cancellation. For the smallest error and best efficiency, one would want to use a minimax approximation.
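
To make the FMA point concrete, here is a minimal Horner-scheme evaluator in Python. Note the assumption that `math.fma` is available, which requires Python 3.13 or later; in C, C++, or CUDA the same step would map to the hardware `fma` instruction:

```python
import math

def polyval_horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) by Horner's scheme.

    Each step is a fused multiply-add: acc*x + c with a single
    rounding instead of two, which is the property that also
    limits round-off growth. Requires Python 3.13+ for math.fma.
    """
    acc = coeffs[-1]
    for c in reversed(coeffs[:-1]):
        acc = math.fma(acc, x, c)  # acc*x + c, rounded once
    return acc
```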

Commonly used tools such as Maple and Mathematica have built-in facilities to generate such approximations. While they produce approximations that are (very close to) optimal in the mathematical sense, they do not typically account for the error incurred by representing the coefficients, and evaluating the operations, in limited floating-point precision. The Sollya tool offers this functionality through its fpminimax command. Lastly, you can write your own approximation code, which would likely be based on the Remez algorithm.
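
If you want a quick stand-in without leaving Python, interpolation at Chebyshev points is not true minimax but typically comes within a small factor of it. A sketch using NumPy, where `func` stands for your slow reference evaluator and the degree is an assumption to tune against your accuracy target:

```python
import numpy as np
from numpy.polynomial import Chebyshev

def make_fast(func, lo, hi, deg):
    """Fit a single Chebyshev polynomial to func on [lo, hi].

    Interpolation at Chebyshev points of the first kind is
    near-minimax; for a true minimax fit, hand the problem to a
    Remez exchange implementation or to Sollya's fpminimax.
    """
    f_vec = np.vectorize(func)
    approx = Chebyshev.interpolate(f_vec, deg, domain=[lo, hi])
    # Always validate against the slow method on a dense grid.
    xs = np.linspace(lo, hi, 2001)
    max_err = np.max(np.abs(approx(xs) - f_vec(xs)))
    return approx, max_err
```

For a piecewise scheme, split the interval into subintervals and fit each with a low degree; several short polynomials are usually faster to evaluate than one high-degree fit.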

For some functions, polynomial approximations are not really practical, as too many terms would be required to reach IEEE-754 double precision. In those cases, one can choose one of two strategies.

The first strategy is to cleverly transform the input arguments, using basic arithmetic and simple elementary functions, such that the resulting function is "well-behaved" with respect to polynomial approximation. Typically, such a transformation tends to "linearize" the function to be approximated. A good instructional example of this approach is the computation of erfc in the following paper:

M. M. Shepherd and J. G. Laframboise, "Chebyshev Approximation of $(1 + 2x)\exp(x^2)\operatorname{erfc} x$ in $0 \leqslant x < \infty$". Mathematics of Computation, Vol. 36, No. 153 (Jan., 1981), pp. 249-253
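
A rough Python sketch of that idea, fitting against scipy.special.erfcx (the scaled function $e^{x^2}\operatorname{erfc} x$) as the reference. The compression map $t = (x-2)/(x+2)$ and the degree are illustrative assumptions; the paper derives its own optimal constants:

```python
import numpy as np
from numpy.polynomial import Chebyshev
from scipy.special import erfcx  # erfcx(x) = exp(x*x) * erfc(x)

def g_of_t(t):
    """The 'linearized' target: (1 + 2x) * exp(x^2) * erfc(x),
    which varies slowly over the whole half-line, unlike erfc itself."""
    x = 2.0 * (1.0 + t) / (1.0 - t)  # inverse of t = (x - 2)/(x + 2)
    return (1.0 + 2.0 * x) * erfcx(x)

# Chebyshev points of the first kind lie strictly inside (-1, 1),
# so the singular endpoint t = 1 (i.e. x = infinity) is never sampled.
fit = Chebyshev.interpolate(g_of_t, deg=24, domain=[-1.0, 1.0])

def erfc_fast(x):
    t = (x - 2.0) / (x + 2.0)
    return fit(t) * np.exp(-x * x) / (1.0 + 2.0 * x)
```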

The second approach is to use the ratio of two polynomials, that is, a rational approximation, for example in the form of a Padé approximant. The tools already mentioned can help with that; there is also a copious amount of literature published on rational approximation, which in general is a harder problem than polynomial approximation.
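
For instance, SciPy can turn a truncated Taylor series into a Padé approximant directly. A small sketch using $\exp(x)$ as a stand-in function; keep in mind a Padé approximant is only locally accurate around the expansion point, so for uniform accuracy on an interval one would instead fit a Chebyshev-Padé or minimax rational form:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0 are 1/n!; any function whose
# series you can generate (symbolically or numerically) works the same way.
taylor = [1.0 / math.factorial(n) for n in range(6)]

# Denominator of order 2; the numerator order defaults to
# len(taylor) - 2 - 1 = 3, giving the [3/2] approximant.
p, q = pade(taylor, 2)           # p, q are numpy.poly1d objects

x = 0.5
print(p(x) / q(x), math.exp(x))  # approximant vs. the true value
```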

For special functions (as opposed to elementary functions), straightforward polynomial and rational approximations are often inaccurate and/or inefficient. They require the application of more advanced mathematical concepts such as asymptotic expansions, recurrence relations, and continued fraction expansions. Even if the use of these solves the problem mathematically, there may still be numerical issues, for example when evaluating continued fractions in the forward direction. Not surprisingly, entire books have been written on the computer evaluation of certain functions, such as Bessel functions and Mathieu functions.
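
As an example of working around such issues, the modified Lentz algorithm evaluates a continued fraction front to back without the overflow-prone forward recurrence. A generic Python sketch; the tolerance, the tiny rescue constant, and the tan(x) test fraction are illustrative choices:

```python
import math

def cf_lentz(a, b, tol=1e-15, max_iter=500):
    """Modified Lentz evaluation of b(0) + a(1)/(b(1) + a(2)/(b(2) + ...)).

    a, b are callables giving the j-th partial numerator/denominator.
    The 'tiny' substitutions rescue the recurrence whenever an
    intermediate quantity happens to vanish.
    """
    tiny = 1e-300
    f = b(0) or tiny
    C, D = f, 0.0
    for j in range(1, max_iter):
        C = b(j) + a(j) / C
        D = b(j) + a(j) * D
        C = C or tiny
        D = 1.0 / (D or tiny)
        delta = C * D
        f *= delta
        if abs(delta - 1.0) < tol:
            return f
    raise RuntimeError("continued fraction did not converge")

# Quick check: tan(x) = x/(1 - x^2/(3 - x^2/(5 - ...)))
x = 1.0
approx = cf_lentz(lambda j: x if j == 1 else -x * x,
                  lambda j: 0.0 if j == 0 else 2.0 * j - 1.0)
print(approx, math.tan(x))  # both ~1.5574077
```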

In the following, I am giving a quick overview of useful literature, starting with coverage of the mathematical foundations, moving on to methods suitable for elementary functions and simple special functions such as erfc and tgamma, and finally to advanced methods for special functions that are harder to compute, both in terms of performance and accuracy. Obviously, this can only scratch the surface; much relevant material on particular functions can be found in individual papers, for example in journals and proceedings from the AMS, SIAM, ACM, and IEEE.

Much of the literature has not yet caught up to modern hardware and software environments, in particular the presence of the FMA operation and of SIMD architectures. In terms of robust computer codes for the evaluation of mathematical functions, one could wish for closer cooperation between mathematics and science on the one hand, and computer science and computer engineering on the other. Among the works below, those by Markstein and Muller are the most advanced in this regard.

Milton Abramowitz and Irene A. Stegun (eds.), "Handbook of Mathematical Functions. With Formulas, Graphs, and Mathematical Tables". New York, NY: Dover 1972

Frank W. J. Olver, et al. (eds.), "NIST Handbook of Mathematical Functions". New York, NY: Cambridge University Press 2010

A. Erdélyi, et al., "Higher Transcendental Functions". Vol. 1-3. New York, NY: McGraw-Hill 1955

Oskar Perron, "Die Lehre von den Kettenbrüchen, 3rd ed." Vol. 1+2. Stuttgart (Germany): Teubner 1954, 1957


John F. Hart, "Computer Approximations". Malabar, FL: Krieger Publishing 1978

William J. Cody and William Waite, "Software Manual for the Elementary Functions". Englewood Cliffs, NJ: Prentice-Hall 1980

Peter Markstein, "IA-64 and Elementary Functions". Upper Saddle River, NJ: Prentice-Hall 2000

Jean-Michel Muller, "Elementary Functions. Algorithms and Implementation 3rd. ed.". Birkhäuser 2016

Nelson H. F. Beebe, "The Mathematical-Function Computation Handbook". Springer 2017

Jean-Michel Muller, et al., "Handbook of Floating-Point Arithmetic, 2nd ed.". Birkhäuser 2018


Nico M. Temme, "Special Functions. An Introduction to the Classical Functions of Mathematical Physics". New York, NY: Wiley 1996

Amparo Gil, Javier Segura, and Nico M. Temme, "Numerical Methods for Special Functions". SIAM 2007


Frank W. J. Olver, "Asymptotics and Special Functions". Natick, MA: A K Peters 1997

Jet Wimp, "Computation with Recurrence Relations". Boston, MA: Pitman 1984


A. N. Khovanskii, "The application of continued fractions and their generalizations to problems in approximation theory". Groningen (Netherlands): Noordhoff 1963

A. Cuyt, et al., "Handbook of Continued Fractions for Special Functions". Springer 2008