Another interesting integral related to the Omega constant

Another interesting integral related to the Omega constant is the following $$\int^\infty_0 \frac{1 + 2\cos x + x \sin x}{1 + 2x \sin x + x^2} dx = \frac{\pi}{1 + \Omega}.$$ Here $\Omega = {\rm W}_0(1) = 0.56714329\ldots$ is the Omega constant, while ${\rm W}_0(x)$ is the principal branch of the Lambert W function. A similar interesting integral can be found here.

When attempting to check the above answer numerically, Mathematica 9.0 gives $2.0101 \pm 0.0005$ (with a complaint about failure to converge to prescribed accuracy after 9 recursive bisections...) compared to the exact answer of $2.004662032\ldots$

Plotting the integrand, two observations can be made: (i) it is well behaved as $x \rightarrow 0^+$ (it tends to the finite value $3$ there), and (ii) for $x$ greater than about 3 it is highly oscillatory, with the amplitude of the oscillations decaying slowly (roughly like $1/x$) to zero as $x \rightarrow \infty$.
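
One quick way to reproduce such a plot is mpmath's built-in plot helper; this assumes matplotlib is installed, and the upper limit of 40 below is an arbitrary choice for viewing the oscillatory tail.

>>> from mpmath import *
>>> F = lambda x: (1+2*cos(x)+x*sin(x))/(1+2*x*sin(x)+x*x)
>>> plot(F, [0, 40])    # opens a matplotlib figure of F on [0, 40]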

Given this, what would be the best way to evaluate the integral numerically if an accuracy of say $10^{-9}$ is needed? This accuracy should be achieved in a "reasonable" amount of time.


Solution 1:

Denote $$F(x) = \frac{1 + 2\cos x + x \sin x}{1 + 2x \sin x + x^2}.$$ Our aim is to compute $$I = \int_0^\infty F(x) ~ dx.$$ Using numerical quadrature specifically designed for oscillatory functions – quadosc, part of mpmath, in Python 2.7 – I get 50 decimal digits of accuracy in ~34 seconds:

>>> from mpmath import *
>>> import time
>>> mp.dps = 50
>>> mp.pretty = True
>>> F = lambda x: (1+2*cos(x)+x*sin(x))/(1+2*x*sin(x)+x*x)
>>> t=time.time(); I=quadosc(F, [0,inf], period=2*pi); elapsed=time.time()-t;
>>> I
2.0046620323839787916115494152293765801902060137759
>>> elapsed
33.79676699638367

Compare with WolframAlpha, or with mpmath:

>>> +pi/(1 + lambertw(1, 0))
2.0046620323839787916115494152293765801902060137759

What's the underlying math here? In Section 1, I'll write my understanding of the quadosc function, as it applies to a function $$f(x) = g(x) \cos(\omega x + \phi),$$ where $g(x)$ is “slowly decaying.” In Section 2, I'll outline some of the important ideas / give a non-rigorous explanation of how quadosc works for $F(x)$ – the “tl;dr” version is that $F(x)$ decays asymptotically and its zeros “asymptotically agree” with those of $\sin x$.

1. quadosc for product of slowly-decaying function and sine wave

Suppose $f(x) = g(x) \cos(\omega x + \phi)$ for some slowly-decaying function $g(x)$. Without loss of generality, suppose $\phi=0$. In this case, quadosc will break the integral $$I = \int_0^\infty f(x) ~ dx$$ into half-periods: $$I = \sum_{k = 0}^\infty \int_{k\pi/\omega}^{(k+1)\pi/\omega} f(x) ~ dx. \tag{*}$$ In other words, it will compute the integral on $[0,\pi/\omega]$, $[\pi/\omega,2\pi/\omega]$, $[2\pi/\omega,3\pi/\omega]$, $[3\pi/\omega,4\pi/\omega]$, etc. (From the source code, it looks like quadosc uses Gauss–Legendre quadrature on each subinterval.)
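
As a concrete sketch of this splitting (a toy example of my own choosing, with $g(x) = 1/(1+x)$, $\omega = 1$, $\phi = 0$, rather than anything taken from the quadosc source), the half-period integrals can be computed directly with quad; their signs alternate and their magnitudes slowly shrink:

>>> from mpmath import *
>>> mp.dps = 25
>>> f = lambda x: cos(x)/(1+x)                         # g(x) = 1/(1+x), omega = 1, phi = 0
>>> terms = [quad(f, [k*pi, (k+1)*pi]) for k in range(8)]
>>> print([sign(t) for t in terms])                    # signs alternate: +1, -1, +1, ...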

The resulting series (*) is an alternating series! Therefore, nsum – the mpmath function for numerically approximating infinite series – is called. According to its documentation, by default nsum attempts to compute the sum in two different ways – Richardson extrapolation and the Shanks transformation – both of which are sequence/series acceleration methods.
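
As a tiny standalone illustration of that summation step (a toy example, not taken from quadosc's internals), nsum can be pointed at the slowly convergent alternating series $\sum_{k=0}^\infty \frac{(-1)^k}{k+1} = \ln 2$ and, thanks to the acceleration, should recover it to roughly the working precision:

>>> from mpmath import *
>>> mp.dps = 15
>>> s = nsum(lambda k: (-1)**int(k)/(k+1), [0, inf])   # the plain partial sums converge very slowly
>>> print(abs(s - log(2)))                             # should be tiny compared to the term size ~1/k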

Presumably both of these components – numerical integration on each subinterval, and numerical summation of the resulting series – are carried out until the desired precision (mp.dps = 50, where dps stands for decimal digits of precision) is reached.
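
Putting the two pieces together by hand gives a rough reconstruction of the mechanism (just a sketch of the idea, not quadosc's actual code path): integrate $F$ over the half-periods $[k\pi, (k+1)\pi]$ with quad and hand the resulting series to nsum.

>>> from mpmath import *
>>> mp.dps = 30
>>> F = lambda x: (1+2*cos(x)+x*sin(x))/(1+2*x*sin(x)+x*x)
>>> term = lambda k: quad(F, [k*pi, (k+1)*pi])         # integral over the k-th half-period
>>> I_manual = nsum(term, [0, inf])                    # accelerate the (eventually alternating) series
>>> print(abs(I_manual - pi/(1 + lambertw(1))))        # should be far below the 1e-9 target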

2. quadosc for $F(x)$

Why does quadosc work for $\int_0^\infty F(x) ~ dx$? Because $F(x)$ asymptotically decays, and because its zeros “asymptotically agree” with those of $\sin x$, in the sense that the positive roots $(x_n)_{n=1}^\infty$ of $F(x)$ (ordered so that $x_n < x_{n+1}$) satisfy $\lim_{n\to\infty} (n\pi - x_n) = 0$.

How? I'll make two claims, and outline the important ideas that (I predict) could be used to derive a rigorous proof. (1) The roots of $1+2\cos x + x\sin x = 0$ “asymptotically agree” with the roots of $1 + x\sin x = 0$. (2) The roots of $1+ x \sin x = 0$ “asymptotically agree” with the roots of $\sin x = 0$.

Important ideas for (1): Invoking the asymptotic series of $\sqrt{4+x^2}$ and $\arctan(x/2)$ – computed with Wolfram Alpha [1], [2] – we have \begin{align*} \sqrt{4+x^2} &\sim x + \frac2x - \frac{2}{x^3} + \frac{4}{x^5} + O(x^{-7}) \\ \arctan\frac x2 &\sim \frac\pi2 - \frac2x + \frac{8}{3x^3} - \frac{32}{5x^5} + O(x^{-7}) \end{align*} Therefore, the numerator of $F(x)$ – which determines the roots of $F(x)$, since the denominator $1 + 2x\sin x + x^2 = (x+\sin x)^2 + \cos^2 x$ never vanishes – satisfies \begin{align*} 1+2\cos(x) + x\sin(x) &= 1 + \sqrt{4+x^2}\cos\left(x - \arctan\frac{x}{2}\right) \\ &\sim 1 + \left(x + \frac2x + O(x^{-3})\right) \cos\left( x - \frac\pi2 + \frac2x + O(x^{-3}) \right). \end{align*} Dropping the $x^{-1}$ and higher-order terms, and recalling $\cos(x - \frac\pi2) = \sin x$, $$ 1+2\cos(x) + x\sin(x) \sim 1 + x\sin(x).$$ The two expressions differ only by the bounded term $2\cos x$, while near their large roots both vary on a scale of order $x$ (their derivatives there have magnitude roughly $x$), so the locations of their roots differ by $O(1/x)$; hence the roots asymptotically agree, as claimed.
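
The amplitude-phase rewrite in the display above, $2\cos x + x\sin x = \sqrt{4+x^2}\,\cos\left(x - \arctan\frac x2\right)$ for $x > 0$, is easy to spot-check numerically; the sample points below are arbitrary.

>>> from mpmath import *
>>> mp.dps = 30
>>> lhs = lambda x: 2*cos(x) + x*sin(x)
>>> rhs = lambda x: sqrt(4 + x*x)*cos(x - atan(x/2))
>>> print(max(abs(lhs(x) - rhs(x)) for x in [mpf(1), mpf(10), mpf(100), mpf(1000)]))   # negligibly small at this precision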

Important ideas for (2): Note that $$1 + x\sin x = 0 \iff \frac1x + \sin x = 0 \quad (\text{assuming } x\not=0).$$ The difference between $\frac1x + \sin x$ and $\sin x$ is $\frac1x$, which vanishes as $x\to\infty$; since $\sin x$ has slope $\pm 1$ at its roots, a perturbation of this size moves each root by only $O(1/x)$, so the roots asymptotically agree, as claimed.
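
Both claims can also be checked numerically: seeding findroot at multiples of $\pi$ locates the nearby roots of the numerator of $F$ (which are the roots of $F$, since the denominator never vanishes), and the offsets from $n\pi$ shrink as $n$ grows; the test values of $n$ below are arbitrary.

>>> from mpmath import *
>>> mp.dps = 30
>>> num = lambda x: 1 + 2*cos(x) + x*sin(x)            # numerator of F
>>> offsets = [findroot(num, n*pi) - n*pi for n in [10, 100, 1000, 10000]]
>>> print(offsets)                                     # should shrink toward zero, roughly like 1/n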