Relation between Meissel–Mertens constant and Euler–Mascheroni constant

Solution 1:

Suppose we have a Dirichlet series

$$F(s) = \sum_{n = 1}^{\infty} \frac{a_n}{n^s}$$

that converges for $\operatorname{Re} s > 0$, and for the partial sums of the coefficients series we have

$$A(x) := \sum_{n \leqslant x} a_n = \varphi(x) + L + R(x)$$

with a "nice" smooth function $\varphi$ and $R(x) \to 0$. We can then split the integral representation

$$F(s) = s\int_1^{\infty} \frac{A(x)}{x^{1+s}}\,dx$$

into (hopefully) easier-to-analyse parts, namely

$$F(s) = s\int_1^{\infty} \frac{\varphi(x)}{x^{1+s}}\,dx + L + s\int_1^{\infty} \frac{R(x)}{x^{1+s}}\,dx\,.$$
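As a quick sanity check of this representation (a sketch, not part of the argument): for finite $N$ the exact partial-summation identity is $\sum_{n \leqslant N} a_n n^{-s} = A(N)N^{-s} + s\int_1^N \frac{A(x)}{x^{1+s}}\,dx$, and since $A$ is a step function the integral can be evaluated piecewise. Below this is verified numerically for the coefficients $a_n = \frac{1}{n}$ at primes (the sequence used further down); the value $s = 0.25$ and the cutoff $N$ are arbitrary choices.

```python
# Sketch: check sum_{n<=N} a_n n^{-s} = A(N) N^{-s} + s * int_1^N A(x) x^{-(1+s)} dx
# for a_n = 1/n at primes, 0 otherwise; A is constant on [n, n+1), so the
# integral reduces to a finite sum of elementary pieces.
from sympy import isprime

def both_sides(s=0.25, N=10_000):
    a = [0.0] + [1.0 / n if isprime(n) else 0.0 for n in range(1, N + 1)]
    lhs = sum(a[n] * n ** (-s) for n in range(1, N + 1))   # truncated Dirichlet series
    A, integral = 0.0, 0.0
    for n in range(1, N):
        A += a[n]                                          # A(n) = sum_{k<=n} a_k
        integral += A * (n ** (-s) - (n + 1) ** (-s))      # = s * int_n^{n+1} A(x) x^{-(1+s)} dx
    rhs = (A + a[N]) * N ** (-s) + integral                # boundary term + s * int_1^N ...
    return lhs, rhs

print(both_sides())   # the two values agree up to floating-point rounding
```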

We note that the last term of this decomposition tends to $0$ as $s \to 0$ inside an angular sector $\lvert \arg s\rvert \leqslant \alpha < \frac{\pi}{2}$, and so the limiting behaviour of $F(s)$ is determined by the first term and $L$. For, given an arbitrary $\delta > 0$, there is a $y \in [1,\infty)$ such that $\lvert R(x)\rvert \leqslant \delta$ for $x \geqslant y$, and splitting the integral at $y$ we observe that

$$E(s) := \int_1^y \frac{R(x)}{x^{1+s}}\,dx$$

is an entire function of $s$ (here we need that $\varphi$ is also nice enough near $1$ since $R$ would reflect any ugliness of $\varphi$ there; but if it weren't, we'd need to modify $\varphi$ to be able to split anyway) and hence $s\cdot E(s) \to 0$ as $s \to 0$. The remaining part can be bounded

$$\Biggl\lvert s\int_y^{\infty} \frac{R(x)}{x^{1+s}}\,dx\Biggr\rvert \leqslant \delta \lvert s\rvert \int_y^{\infty} \frac{dx}{x^{1+\sigma}} = \delta \frac{\lvert s\rvert}{\sigma y^{\sigma}} \leqslant \delta \frac{\lvert s\rvert}{\sigma}$$

where $\sigma = \operatorname{Re} s > 0$. If $\lvert \arg s\rvert \leqslant \alpha < \frac{\pi}{2}$, the quantity $\frac{\lvert s\rvert}{\sigma}$ is bounded by $\frac{1}{\cos \alpha}$, and so

$$\limsup_{\substack{s \to 0 \\ \lvert \arg s\rvert \leqslant \alpha}} \: \Biggl\lvert s\int_1^{\infty} \frac{R(x)}{x^{1+s}}\,dx\Biggr\rvert \leqslant \frac{\delta}{\cos\alpha}\,.$$

Since $\delta$ was arbitrary,

$$\lim_{\substack{s \to 0 \\ \lvert \arg s\rvert \leqslant \alpha}} s\int_1^{\infty} \frac{R(x)}{x^{1+s}}\,dx = 0$$

follows.

For $\varphi(x) = \log \log x$, we can evaluate the first integral explicitly and thus interpret the limiting behaviour. To see how the decomposition of the Meissel–Mertens constant $M$ arises, we let $a_n = \frac{1}{n}$ if $n$ is prime and $a_n = 0$ otherwise (in closed form, $a_n = \frac{\pi(n) - \pi(n-1)}{n}$). Then $F(s) = P(1+s)$, where $P$ is the prime zeta function. We temporarily restrict $s$ to positive real numbers, so that the following manipulations admit a more elementary justification. Then

\begin{align} s\int_1^{\infty} \frac{\log \log x}{x^{1+s}}\,dx &= s\int_0^{\infty} \frac{\log t}{e^{(1+s)t}}e^t\,dt \tag{$x = e^t$} \\ &= s\int_0^{\infty} (\log t) e^{-st}\,dt \\ &= \int_0^{\infty} (\log u - \log s)e^{-u}\,du \tag{$u = st$} \\ &= \int_0^{\infty} (\log u)u^{1-1}e^{-u}\,du - \log s \int_0^{\infty} u^{1-1}e^{-u}\,du \\ &= \Gamma'(1) - (\log s)\Gamma(1) \\ &= \log \frac{1}{s} - \gamma\,. \end{align}
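This evaluation is easy to confirm numerically for a real $s > 0$; a minimal sketch with mpmath, using the intermediate form $s\int_0^{\infty} (\log t)\,e^{-st}\,dt$ (the extra quadrature breakpoint at $1$ is just a convenience):

```python
# Sketch: numerically confirm s * int_0^inf log(t) e^{-s t} dt = log(1/s) - gamma.
from mpmath import mp, mpf, quad, log, exp, euler, inf

mp.dps = 30
s = mpf('0.1')
lhs = s * quad(lambda t: log(t) * exp(-s * t), [0, 1, inf])
rhs = log(1 / s) - euler
print(lhs, rhs)   # the two values should agree to (nearly) full working precision
```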

By the identity theorem this holds for $\operatorname{Re} s > 0$. Since, by Mertens' second theorem, $A(x) = \sum_{p \leqslant x} \frac{1}{p} = \log \log x + M + R(x)$ with $R(x) \to 0$ (the existence of the limit defining $M$ is addressed at the end of this answer), we have found

$$P(1+s) = \log \frac{1}{s} + (M - \gamma) + s\int_1^{\infty} \frac{R(x)}{x^{1+s}}\,dx\,,$$

where the last term tends to $0$ as $s\to 0$ in an angular sector. Thus $\gamma$ comes from $\Gamma'(1)$, not (directly) from the Laurent expansion of $\zeta$ about $1$. It occurs naturally whenever we have a coefficient sequence such that $\lim \bigl(A(x) - \log \log x\bigr)$ exists. To understand $M - \gamma$, we can compare the prime zeta function to another function with a logarithmic singularity at $1$. Since $\zeta$ has a simple pole at $1$ and its logarithm is easy to handle due to the Euler product, this is a natural choice. Writing

$$\zeta(s) = \frac{1}{s-1} + c_0 + c_1(s-1) + \dotsc = \frac{1 + c_0(s-1) + \dotsc}{s-1} = \frac{G(s)}{s-1}$$

where $G$ is an entire function with $G(1) = 1$ which has no zeros in the half-plane $\operatorname{Re} s > 1$, we find

$$\log \zeta(s) = \log\frac{1}{s-1} + \log G(s)$$

for $\operatorname{Re} s > 1$, or, translating by $1$,

$$\log \zeta(1+s) = \log \frac{1}{s} + \log G(1+s)$$

for $\operatorname{Re} s > 0$, where $\log G(1+s)$ is holomorphic in the right half-plane and on a neighbourhood of $0$, and tends to $0$ as $s \to 0$. It follows that

$$M - \gamma = \lim_{\substack{s \to 0 \\ \lvert \arg s\rvert \leqslant \alpha}} \bigl(P(1+s) - \log \zeta(1+s)\bigr)\,.$$
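Both limits can be watched numerically. The sketch below assumes mpmath's `primezeta` and its built-in constants `mertens` ($= M$) and `euler` ($= \gamma$); the sample values of $s$ are arbitrary. The middle column is $\log G(1+s) = \log \zeta(1+s) - \log\frac{1}{s}$, the right column is $P(1+s) - \log\zeta(1+s)$.

```python
# Sketch: as s -> 0+, log(zeta(1+s)) - log(1/s) = log G(1+s) -> 0, while
# P(1+s) - log(zeta(1+s)) -> M - gamma.
from mpmath import mp, mpf, zeta, primezeta, log, mertens, euler

mp.dps = 25
for s in [mpf('0.1'), mpf('0.01'), mpf('0.001')]:
    logG = log(zeta(1 + s)) - log(1 / s)
    diff = primezeta(1 + s) - log(zeta(1 + s))
    print(s, logG, diff)
print('M - gamma =', mertens - euler)
```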

But from the Euler product we obtain (using the principal branch of the logarithm for each factor)

$$\log \zeta(1+s) = - \sum_p \log \biggl(1 - \frac{1}{p^{1+s}}\biggr)$$

and hence

$$P(1+s) - \log \zeta(1+s) = \sum_p \frac{1}{p^{1+s}} + \sum_p \log \biggl(1 - \frac{1}{p^{1+s}}\biggr) = \sum_p \Biggl(\frac{1}{p^{1+s}} + \log \biggl(1 - \frac{1}{p^{1+s}}\biggr)\Biggr)\,.$$

A Taylor expansion of the logarithms on the right shows that the series converges (absolutely) for $\operatorname{Re} s > -\frac{1}{2}$, so

$$M-\gamma = \lim_{s\to 0} \bigl(P(1+s) - \log \zeta(1+s)\bigr) = \sum_p \Biggl(\frac{1}{p} + \log \biggl(1 - \frac{1}{p}\biggr)\Biggr)\,.$$
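This prime sum converges at a pedestrian rate, since the tail over $p > x$ is of order $\frac{1}{x \log x}$. A small sketch, again leaning on mpmath's `mertens` and `euler` constants for the reference value:

```python
# Sketch: the partial sum of sum_p (1/p + log(1 - 1/p)) over p < 10^6
# differs from M - gamma only by a tail of order 1/(x log x) ~ 1e-8.
from math import log1p
from sympy import primerange
from mpmath import mp, mertens, euler

mp.dps = 15
partial = sum(1 / p + log1p(-1 / p) for p in primerange(2, 10**6))
print(partial)            # partial sum over the primes below 10^6
print(mertens - euler)    # reference value M - gamma ~ -0.315718452
```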

A faster converging series for $M - \gamma$ is

$$M - \gamma = \sum_{m = 2}^{\infty} \frac{\mu(m)\log \zeta(m)}{m}\,.$$
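A sketch of this series, truncated at $m < 120$; the terms decay roughly like $2^{-m}/m$, so the truncation error is far below the working precision. It assumes `mobius` is importable from sympy.

```python
# Sketch: M - gamma = sum_{m>=2} mu(m) log(zeta(m)) / m, truncated at m < 120.
from sympy import mobius
from mpmath import mp, zeta, log, mertens, euler

mp.dps = 30
approx = sum(int(mobius(m)) * log(zeta(m)) / m for m in range(2, 120))
print(approx)
print(mertens - euler)   # the two values should agree to full working precision
```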

Note that the above assumes the existence of

$$\lim_{x \to \infty} \:\Biggl(\sum_{p \leqslant x} \frac{1}{p} - \log \log x\Biggr)\,.$$

To prove the existence, write

$$\sum_{p \leqslant x} \frac{1}{p^{1+\varepsilon}} = \log \zeta(1+\varepsilon) - \sum_{p > x} \frac{1}{p^{1+\varepsilon}} + \bigl(P(1+\varepsilon) - \log \zeta(1+\varepsilon)\bigr)$$

for $\varepsilon > 0$ and apply summation by parts to

$$\sum_{p > x} \frac{1}{p^{1+\varepsilon}} = \sum_{p > x} \frac{\log p}{p}\cdot \frac{1}{p^{\varepsilon}\log p}\,,$$

using Mertens' first theorem

$$\Biggl\lvert \sum_{p \leqslant x} \frac{\log p}{p} - \log x\Biggr\rvert \leqslant 2\,.$$

Then let $\varepsilon \to 0$.
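To make the limit concrete, here is a closing sketch tabulating $\sum_{p \leqslant x} \frac{1}{p} - \log \log x$ together with $\sum_{p \leqslant x} \frac{\log p}{p} - \log x$ for a few values of $x$: the first column drifts towards $M \approx 0.26150$, and the second stays comfortably within the bound $2$ from Mertens' first theorem.

```python
# Sketch: sum_{p<=x} 1/p - log(log x) approaches M, while
# sum_{p<=x} log(p)/p - log x stays within [-2, 2] (Mertens' first theorem).
from math import log
from sympy import primerange

for x in (10**3, 10**4, 10**5, 10**6):
    ps = list(primerange(2, x + 1))
    a = sum(1 / p for p in ps) - log(log(x))     # -> M (Mertens' second theorem)
    b = sum(log(p) / p for p in ps) - log(x)     # bounded (Mertens' first theorem)
    print(x, a, b)
```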