Lambert function approximation $W_0$ branch

I am looking for a simple, inexpensive, and very accurate approximation of the Lambert $W$ function ($W_0$ branch) for $-1/e < x < 0$.


Solution 1:

Here is a post that I made on sci.math a while ago, regarding a method I have used.

Analysis of $we^w$

For $w\gt0$, $we^w$ increases monotonically from $0$ to $\infty$. When $w\lt0$, $we^w$ is negative.

Thus, for $x\gt0$, $\mathrm{W}(x)$ is positive and well-defined and increases monotonically.

For $w\lt0$, $we^w$ reaches a minimum of $-1/e$ at $w=-1$. On $(-1,0)$, $w e^w$ increases monotonically from $-1/e$ to $0$. On $(-\infty,-1)$, $w e^w$ decreases monotonically from $0$ to $-1/e$. Thus, on $(-1/e,0)$, $\mathrm{W}(x)$ can have one of two values, one in $(-1,0)$ and another in $(-\infty,-1)$. The value in $(-1,0)$ is called the principal value.
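This picture is easy to verify numerically. A quick Python check (the helper `f` below is just the map $w\mapsto we^w$, not part of the original post): the minimum $-1/e$ is attained at $w=-1$, and a value $x\in(-1/e,0)$ has two preimages, one on each side of $-1$.

```python
import math

def f(w):
    """The map w -> w*exp(w), whose inverse is the Lambert W function."""
    return w * math.exp(w)

# Minimum on the negative axis: -1/e, attained at w = -1.
print(f(-1), -1/math.e)  # both ≈ -0.3678794

# Two preimages of x = -0.2 in (-1/e, 0):
# one in (-1, 0) (principal value), one in (-inf, -1).
print(f(-0.259171101819))   # ≈ -0.2
print(f(-2.542641357773))   # ≈ -0.2
```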

The iteration

Using Newton's method to solve $we^w=x$ yields the following iterative step for finding $\mathrm{W}(x)$: $$ w_{\text{new}}=\frac{xe^{-w}+w^2}{w+1} $$

Initial values of $w$

For the principal value, when $-1/e\le x\le10$, use $w=0$; when $x\gt10$, use $w=\log(x)-\log(\log(x))$.

For the non-principal value, when $x$ is in $[-1/e,-0.1]$, use $w=-2$; and if $x$ is in $(-0.1,0)$, use $w=\log(-x)-\log(-\log(-x))$.

This says that for the branch you want, use the iteration with an initial value of $w=0$.
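Putting the pieces together, here is a minimal Python sketch of this scheme (the function name, the step tolerance, and the iteration cap are my own choices, not from the post):

```python
import math

def lambert_w(x, principal=True, tol=1e-14, maxiter=50):
    """Newton iteration w <- (x*exp(-w) + w^2)/(w + 1) with the
    starting values suggested above."""
    if principal:
        # w = 0 works for -1/e <= x <= 10; beyond that use log(x) - log(log(x))
        w = 0.0 if x <= 10 else math.log(x) - math.log(math.log(x))
    else:
        # non-principal value, defined for -1/e <= x < 0
        w = -2.0 if x <= -0.1 else math.log(-x) - math.log(-math.log(-x))
    for _ in range(maxiter):
        w_new = (x * math.exp(-w) + w * w) / (w + 1)
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

print(lambert_w(1.0))                    # Omega constant, ≈ 0.5671432904
print(lambert_w(-0.2))                   # ≈ -0.2591711018
print(lambert_w(-0.2, principal=False))  # ≈ -2.5426413578
```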

Solution 2:

Corless et al. (1996) is a good reference for implementing the Lambert $W$ function. One key to being efficient is a good initial guess for the iterative refinement, which is where series approximations are useful. For $(-1/\text{e}, 0)$ it's helpful to use two different expansions: one about $0$ and one about the branch point $-1/\text{e}$. Otherwise you can potentially waste a lot of time iterating.

Here's some unvectorized (for clarity) Matlab code for $W_0$. I've indicated the equation numbers from Corless et al. on which the various lines are based.

function w = fastW0(x)
% Lambert W function, upper branch, W_0
% Only valid for real-valued scalar x in [-1/e, 0]

if x > -exp(-sqrt(2))
    % Series about 0, Eq. 3.1
    w = ((((125*x-64)*x+36)*x/24-1)*x+1)*x;
else
    % Series about branch point, -1/e, Eq. 4.22
    p = sqrt(2*(exp(1)*x+1));
    w = p*(1-(1/3)*p*(1-(11/24)*p*(1-(86/165)*p*(1-(769/1376)*p))))-1;
end

tol = eps(class(x));
maxiter = 4;
for i = 1:maxiter
    ew = exp(w);
    we = w*ew;
    r = we-x; % Residual, Eq. 5.2
    if abs(r) < tol
        return;
    end

    % Halley's method, for Lambert W from Corless, et al. 1996, Eq. 5.9
    w = w-r/(we+ew-0.5*r*(w+2)/(w+1));
end

Here's a plot (for double precision, tolerance of machine epsilon) of the number of iterations until convergence when using no initial series guess, the series about $0$, the series about the branch point, or both series:

[Plot: iterations until convergence vs. $x$]

Of course the evaluation of the series approximation has a cost relative to performing an iteration. One can adjust how many terms are used in order to tune this.
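For readers without Matlab, here is a line-for-line Python port of the sketch above (same series, same Halley step; the name `fast_w0` and the hard-coded tolerance standing in for `eps(class(x))` are mine):

```python
import math

def fast_w0(x):
    """Lambert W, upper branch W_0; real scalar x in [-1/e, 0]."""
    if x > -math.exp(-math.sqrt(2)):
        # Series about 0, Eq. 3.1
        w = ((((125*x - 64)*x + 36)*x/24 - 1)*x + 1)*x
    else:
        # Series about the branch point -1/e, Eq. 4.22
        p = math.sqrt(2*(math.e*x + 1))
        w = p*(1 - p/3*(1 - 11/24*p*(1 - 86/165*p*(1 - 769/1376*p)))) - 1
    tol = 1e-16  # ~ double-precision machine epsilon
    for _ in range(4):
        ew = math.exp(w)
        we = w*ew
        r = we - x  # residual, Eq. 5.2
        if abs(r) < tol:
            break
        # Halley's method for Lambert W, Eq. 5.9
        w -= r/(we + ew - 0.5*r*(w + 2)/(w + 1))
    return w

print(fast_w0(-0.2))   # ≈ -0.2591711018 (series about 0)
print(fast_w0(-0.35))  # uses the branch-point series
```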

Solution 3:

An iterative procedure, given on Wikipedia and translated quick-and-dirty to Pari/GP, which suits my needs well:

 LW(x, prec=1E-80, maxiters=200) = local(w, we, w1e);
   w = 0;
   for(i=1, maxiters,
         we  = w*exp(w);
         w1e = (w+1)*exp(w);
         \\ stop when the Newton step is below the tolerance
         if(prec > abs((x-we)/w1e), return(w));
         \\ Halley step
         w = w - (we-x)/(w1e - (w+2)*(we-x)/(2*w+2));
      );
   print("W doesn't converge fast enough for ", x, " (we=", we, ")");
   return(0);

Example:

 default(precision,200)
   x = -0.99999/exp(1)
 %382 = -0.367875762377

   y=LW(x)
 %383 = -0.995534517079

  [chk=y*exp(y);x;chk-x]
 %384 = 
[-0.367875762377]
[-0.367875762377]
[2.49762470622 E-163]

Solution 4:

Maple produces $$\operatorname{Lambert W}(x)=x-{x}^{2}+{\frac {3}{2}}{x}^{3}-{\frac {8}{3}}{x}^{4}+{\frac {125}{24 }}{x}^{5}+O \left( {x}^{6} \right), x\to 0.$$ Let us compare $\operatorname{Lambert W}(-.2)=-.2591711018$ with $-0.2-(-0.2)^{2}+3/2(-0.2)^{3}-8/3(-0.2)^{4}+{\frac {125}{24}}(-0.2)^{5}= -.2579333334$.
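The same comparison is easy to reproduce; a small Python check of the truncated series against the reference value quoted above:

```python
# 5-term Maclaurin series of Lambert W, truncated after x^5
def w_series(x):
    return x - x**2 + (3/2)*x**3 - (8/3)*x**4 + (125/24)*x**5

print(w_series(-0.2))  # ≈ -0.2579333333, vs. LambertW(-0.2) ≈ -0.2591711018
```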

Solution 5:

For the range $-\frac 1e \leq x \leq 0$, using $y=\sqrt{2(1+ex)}$ (as in this post) and building Padé approximants $P_n$ around $y=0$, one obtains $$P_1=\frac{-1+\frac{2 y}{3} } {1+\frac{y}{3} }$$ $$P_2=\frac{-1+\frac{14 y}{45}+\frac{301 y^2}{1080} } {1+\frac{31 y}{45}+\frac{83 y^2}{1080} }$$ $$P_3=\frac{-1-\frac{974 y}{22659}+\frac{21865 y^2}{51792}+\frac{89131 y^3}{1165320} } {1+\frac{23633 y}{22659}+\frac{104225 y^2}{362544}+\frac{3167 y^3}{196560} }$$ $$P_4=\frac{-1-\frac{11637254 y}{29330279}+\frac{463636649 y^2}{1055890044}+\frac{23930361857 y^3}{110868454620}+\frac{192684057311 y^4}{10643371643520} } {1+\frac{40967533 y}{29330279}+\frac{659231191 y^2}{1055890044}+\frac{1928737771 y^3}{20157900840}+\frac{34384971553 y^4}{10643371643520} }$$ The maximum errors occur at $x=0$; for the above formulae, they are $\Delta_1=-3.88\times 10^{-2}$, $\Delta_2=-1.23\times 10^{-3}$, $\Delta_3=-3.73\times 10^{-5}$ and $\Delta_4=-1.12\times 10^{-6}$.

As for the integrated squared errors $$S_i=\int_{-\frac 1e}^0 \left(W(x)-P_i\right)^2\,dx$$ the values are $S_1=3.93\times 10^{-4}$, $S_2=2.57\times 10^{-7}$, $S_3=1.76\times 10^{-10}$ and $S_4=1.26\times 10^{-13}$.
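A quick Python check of the first two approximants, evaluated in the variable $y=\sqrt{2(1+ex)}$: at the branch point $x=-1/e$ we have $y=0$ and both give $W(-1/e)=-1$ exactly, while at $x=0$ (i.e. $y=\sqrt2$, where $W(0)=0$) the values reproduce the quoted maximum errors $\Delta_1$ and $\Delta_2$:

```python
import math

def p1(y):
    return (-1 + 2*y/3) / (1 + y/3)

def p2(y):
    return (-1 + 14*y/45 + 301*y**2/1080) / (1 + 31*y/45 + 83*y**2/1080)

# Branch point x = -1/e corresponds to y = 0:
print(p1(0.0), p2(0.0))    # -1.0 -1.0, the exact value W(-1/e) = -1

# x = 0 corresponds to y = sqrt(2), where the maximum errors occur:
print(p1(math.sqrt(2)))    # ≈ -0.0389  (Delta_1 ≈ -3.88e-2)
print(p2(math.sqrt(2)))    # ≈ -0.00123 (Delta_2 ≈ -1.23e-3)
```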

Edit

For a "sanity" check, I also considered all the possible $[k,6-k]$ Padé approximants and obtained the following results $$\left( \begin{array}{ccc} k & \Delta_k & S_k \\ 0 & -0.108806 & 1.84 \times 10^{-3} \\ 1 & -0.007522 & 6.16 \times 10^{-6} \\ 2 & -0.000171 & 3.49 \times 10^{-9}\\ \color{red} { 3} & \color{red} {-0.000037} & \color{red} {1.76 \times 10^{-10}}\\ 4 & -0.000082 & 8.17 \times 10^{-10}\\ 5 & -0.000966 & 1.05 \times 10^{-7}\\ 6 & -0.094985 & 9.04 \times 10^{-4} \end{array} \right)$$