Can this function be rewritten to improve numerical stability?

I'm writing a program that needs to evaluate the function $$f(x) = \frac{1 - e^{-ux}}{u}$$ often with small values of $u$ (i.e. $ux \ll 1$). In the limit $u \to 0$ we have $f(x) = x$ by L'Hôpital's rule, so the function is well-behaved and non-singular in this limit, but evaluating it the straightforward way in floating-point arithmetic causes problems when $u$ is small: the exponential is then very close to 1, and when we subtract it from 1 the difference retains only a few significant digits (catastrophic cancellation). Is there a way to rewrite this function so it can be computed accurately both in the small-$u$ limit and for larger values of $u$ (i.e. $ux \approx 1$)?

Of course, I could add a test to my program that switches to computing just $f(x) = x$, or perhaps $f(x) = x - ux^2/2$ (second-order approximation) when $u$ is smaller than some threshold. But I'm curious to see if there's a way to do this without introducing a test.
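For concreteness, here is a minimal C sketch of that branching idea (the cutoff 1e-8, roughly $\sqrt{\varepsilon}$, is an arbitrary illustrative threshold, not a tuned value):

```c
#include <math.h>

/* Sketch of the branching approach: fall back to the second-order
 * Taylor approximation f(x) ~ x - u*x^2/2 when u*x is tiny. */
double f_branch(double u, double x)
{
    if (fabs(u * x) < 1e-8)        /* arbitrary cutoff, roughly sqrt(eps) */
        return x - u * x * x / 2.0;
    return (1.0 - exp(-u * x)) / u;
}
```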


We once had this (or at least something very similar) in my first numerical analysis class:

There it was proposed to calculate $f$ in terms of $y = \exp(-ux)$ instead, i.e. first compute $y$ and then calculate $$f(x) = \frac{1-e^{-ux}}{u} = -x\frac{1-y}{\log(y)}$$

I have never understood why this should have any effect on the calculation, however... (It may only have an effect if we make some assumptions about how the values are calculated inside the computer.)

Edit: J.M. gave a very nice link in the comments, where it is explained why the above works!

If you make some numerical tests, let me know what you get (I'd be interested whether this actually does what they claimed it would)! ;)
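For anyone who wants to try: a minimal C sketch of such a test might look like the following (the values of $u$ and $x$ are arbitrary, and C99's expm1() is used only to produce a reference value):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0;
    for (int k = 4; k <= 16; k += 2) {
        double u     = pow(10.0, -k);
        double y     = exp(-u * x);
        double naive = (1.0 - y) / u;
        /* the proposed rewrite; guard the 0/0 case when y rounds to 1 */
        double trick = (y == 1.0) ? x : -x * (1.0 - y) / log(y);
        double ref   = -expm1(-u * x) / u;   /* reference via C99 expm1() */
        printf("u=1e-%02d  naive=%.17g  trick=%.17g  ref=%.17g\n",
               k, naive, trick, ref);
    }
    return 0;
}
```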


The error you get is due to the finite precision of floating-point arithmetic.

What is happening is not so much that $1-e^{-ux}$ has issues with machine precision, but rather that you are also dividing by something that is quite small.

Essentially, if $ux$ is less than approximately $2.2 \times 10^{-16}$ (the machine $\varepsilon$ for double-precision arithmetic), then $e^{-ux}$ rounds to exactly $1$, and the quantity $1-e^{-ux}$ returns exactly $0$. However, the rounding itself is not so much an issue... $1$ times something is very nearly the same thing as $0.99999999999999\ldots$ times something.

The problem is then dividing by a very small number, i.e. multiplying by the huge factor $1/u$. If your $u < 2.2\times 10^{-16}$, then $1/u$ is very large, and the numerator is unable to "cancel" this effect. Furthermore, if $1/u$ exceeds the overflow limit for doubles, which is about $1.8\times 10^{308}$ (i.e. $2^{1024}$), you're out of luck anyways.
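To make the rounding concrete, here is a tiny double-precision experiment (the values of $u$ and $x$ are arbitrary):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double u = 1e-17, x = 1.0;   /* u*x well below the machine epsilon */
    double y = exp(-u * x);      /* rounds to exactly 1.0 */
    printf("exp(-u*x) == 1.0 ?   %s\n", y == 1.0 ? "yes" : "no");
    printf("1 - exp(-u*x)      = %g\n", 1.0 - y);          /* exactly 0 */
    printf("(1 - exp(-u*x))/u  = %g  (true value ~1)\n", (1.0 - y) / u);
    return 0;
}
```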

One thing that you can do is compute $\log f(x)$ first: $\log f(x) = \log(1-e^{-ux})-\log(u)$. You will still have the machine epsilon issue (i.e., small values of $u$ will make the argument $1-e^{-ux}$ collapse to $0$, but this is going to be nearly true for values in a neighborhood anyways), but the troublesome division becomes a much more manageable subtraction of $\log(u)$.
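A minimal sketch of that in C (assuming $u > 0$ and $x > 0$ so both log arguments are positive; the helper name is just for illustration):

```c
#include <math.h>

/* Compute log f(x) = log(1 - exp(-u*x)) - log(u), then exponentiate.
 * Note the cancellation inside the first log remains for tiny u*x. */
double f_via_log(double u, double x)
{
    double logf = log(1.0 - exp(-u * x)) - log(u);
    return exp(logf);
}
```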

Edit: Of course, one other method is to use the trusty ol' Taylor expansion of $e^{-ux}$. Then you have $$f(x) = \sum_{i=1}^{\infty} \frac{(-1)^{i+1}(ux)^i}{i!\,u} = x - \frac{ux^2}{2} + \frac{u^2x^3}{6} - \cdots$$ Each term has a small numerator and a small denominator, and one power of $u$ cancels exactly, so no term blows up as $u \to 0$. Compute each term in the sum independently, store them, and then add.
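A minimal sketch of that summation in C (the term count 20 is an arbitrary choice, and the truncation is only sensible for moderate $|ux|$):

```c
#include <math.h>

#define NTERMS 20   /* arbitrary truncation; fine for moderate |u*x| */

/* f(x) = x - u*x^2/2 + u^2*x^3/6 - ... : generate the terms by the
 * recurrence t_{i+1} = t_i * (-u*x)/(i+1), store them, then add
 * starting from the smallest so the little terms aren't swamped. */
double f_taylor(double u, double x)
{
    double terms[NTERMS];
    double t = x;                          /* first term (i = 1) */
    for (int i = 0; i < NTERMS; i++) {
        terms[i] = t;
        t *= -u * x / (i + 2);
    }
    double sum = 0.0;
    for (int i = NTERMS - 1; i >= 0; i--)
        sum += terms[i];
    return sum;
}
```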


As J.M. pointed out in a comment to another answer, some platforms and programming languages have a primitive that computes $2^x-1$ or $e^x-1$ in a single step, without precision loss for small $x$. For example, in C99 <math.h> there is the expm1() function, which ought to map to the F2XM1 instruction on the x86 platform (after scaling $x$ by $\frac{1}{\ln 2}$).
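In terms of the original function, that gives essentially a one-liner; a sketch (the $u = 0$ guard is an extra convenience, not part of the original question):

```c
#include <math.h>

/* f(x) = (1 - exp(-u*x)) / u, evaluated via C99 expm1() so that the
 * difference 1 - exp(-u*x) is never formed by explicit subtraction. */
double f_stable(double u, double x)
{
    if (u == 0.0)                /* limit u -> 0: f(x) = x */
        return x;
    return -expm1(-u * x) / u;
}
```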