If these two expressions for calculating the prime counting function are equal, why doesn't this work?

So I've seen some different explanations of how the zeros of the zeta function can predict the prime counting function. The common example is that

$$\pi(x)=\sum_{n=1}^\infty \frac{\mu(n)}{n}J(x^{1/n})$$

where $\mu(n)$ is the Möbius function and

$$J(x)=Li(x)+\sum_{\rho}Li(x^{\rho})-\ln(2)+\int_{x}^\infty\frac{1}{t(t^2-1)\ln(t)}dt$$

For future ease let's call

$$m(x)=-\ln(2)+\int_{x}^\infty\frac{1}{t(t^2-1)\ln(t)}dt$$

I've also seen that

$$\pi(x)=R(x)-\sum_{\rho}R(x^{\rho})-\sum_{n=1}^\infty R(x^{-2n})$$

The sums over $\rho$ run over the complex nontrivial zeros, the last term above accounts for the trivial zeros, and

$$R(x)=\sum_{n=1}^\infty\frac{\mu(n)}{n}Li(x^{1/n})$$

My thought process was that I have two expressions for $\pi(x)$. They must simply be different forms of the same thing; otherwise I could equate them and get something new. So I equated them to see what would happen. (On a side note, I'm not really sure where the offset logarithmic integral is meant and where it is not. I initially assumed they were all the offset $Li$, but I'm not sure if that's accurate, so please correct me if that's wrong or if it even matters.) Anyway, let's take the first form of $\pi(x)$ and convert it a bit.

\begin{align}\pi(x)&=\sum_{n=1}^\infty\frac{\mu(n)}{n}J(x^{1/n})\\&=\sum_{n=1}^\infty\frac{\mu(n)}{n}Li(x^{1/n})+\sum_{n=1}^\infty\frac{\mu(n)}{n}\sum_{\rho}Li(x^{\rho/n})+\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})\\&=\sum_{n=1}^\infty\frac{\mu(n)}{n}Li(x^{1/n})+\sum_{\rho}\sum_{n=1}^\infty\frac{\mu(n)}{n}Li(x^{\rho/n})+\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})\\&=R(x)+\sum_{\rho}R(x^\rho)+\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})\end{align}

We know from the other form that $\pi(x)$ also equals

$$R(x)-\sum_{\rho}R(x^\rho)-\sum_{n=1}^\infty R(x^{-2n})$$

Immediately, the fact that one expression has a positive sum over the zeros and the other has a negative sum over the zeros hints to me that something's not right. If we continue on, equating the two and then substituting back into the $\pi(x)$ equation, we would get

$$\sum_{\rho}R(x^{\rho})=-\frac{1}{2}\left(\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})+\sum_{n=1}^\infty R(x^{-2n})\right)$$

And from that, we get to

$$\pi(x)=R(x)+\frac{1}{2}\sum_{n=1}^\infty\left(\frac{\mu(n)}{n}m(x^{1/n})-R(x^{-2n})\right)$$

Interestingly, because

$$\sum_{n=1}^\infty\frac{\mu(n)}{n}m(x^{1/n})=-\sum_{n=1}^\infty R(x^{-2n})$$

we circle back around, obtaining only the approximation

$$\pi(x)=R(x)-\sum_{n=1}^\infty R(x^{-2n})$$

Thus the result is seemingly trivial; however, all of my steps seemed legitimate. What is wrong here?
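
For what it's worth, here is a quick numerical check of that last approximation (a Python sketch, assuming mpmath and sympy are available: `riemannr` evaluates $R$ via the Gram series and `primepi` gives the exact count; since the trivial-zero sum is a small bounded correction, I only compare $R(x)$ with $\pi(x)$):

```python
# Sanity check: R(x) (and hence R(x) minus the small trivial-zero sum) only
# *approximates* pi(x); the missing oscillating piece is the sum over the
# nontrivial zeros.
from mpmath import mp, riemannr   # riemannr evaluates Riemann's R function
from sympy import primepi         # exact prime-counting function

mp.dps = 20
for x in (100, 1000, 10**4, 10**5, 10**6):
    # the two values stay close, but the difference changes sign and grows slowly
    print(x, riemannr(x), primepi(x))
```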


The short answer was already given ("The $+$ sign for $\sum_{\rho}Li(x^{\rho})$ in your definition of $J(x)$ is wrong"), so let's use this opportunity to:

  • provide a glimpse of the derivation of the explicit formulas (a really fascinating subject after all!) and
  • consider the next traps in this game...

(What follows is from a sketch of the derivation of $\;\pi^*(x)=R(x)-\sum_{\rho} R(x^{\rho})\,$ in this answer;
see Edwards' excellent "Riemann's Zeta Function" for detailed proofs.
As usual $\,s\in\mathbb{C}\,$ will be written as $\;s:=\sigma+it\;$ and every $\,p$ is supposed prime.) $$-$$ Let's start with the famous Euler product:

$$\tag{1}\displaystyle\zeta(s)=\prod_{p\ \text{prime}}\frac 1{1-p^{-s}}\quad\text{for}\ \ \Re(s)=\sigma>1$$

When we apply Perron's formula to the logarithmic derivative of $\zeta(s)$ (considered as a Dirichlet series) we get, for $\,c>1$ and $\,x$ any positive real value: $$\tag{2}-\frac1{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\zeta'(s)}{\zeta(s)}\frac{x^s}s\,ds=\sum_{p^k\le x}^{*}\log(p)=\sum_{n\le x}^{*}\Lambda(n)=:\psi^*(x)$$

where $\psi$ is the second Chebyshev function, proved asymptotically equivalent to $x$ (as $\,x\to\infty$) by the P.N.T. (and $\psi^*$ is its slight variation in which the value of $\psi(x)$ at each discontinuity is replaced by the mean of its left and right limits).

From the poles of the integrand in $(2)$ (supposing $x\ge c\;$ i.e. $\;x>1$) we obtain the residues at $\displaystyle s=0\mapsto -\frac{\zeta'(0)}{\zeta(0)}=-\log(2\pi),\ 1\mapsto x^1,\ \rho\mapsto -\frac{x^\rho}{\rho}$ for $\rho$ any zero of $\zeta(s)$
(I don't distinguish the trivial and nontrivial zeros at this point...).

This gives us directly the first explicit (von Mangoldt) formula: $$\tag{3}\boxed{\displaystyle\psi^*(x)=-\log(2\pi)+x-\sum_{\rho} \frac {x^{\rho}}{\rho}}\quad(x>1)$$ (for $x<2$ we must have $\,\psi^*(x)=0\,$ from its definition $(2)$)
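
(Formula $(3)$ is easy to test numerically. A small Python sketch with mpmath, not part of the derivation: I separate the trivial zeros, whose contribution $\sum_{n\ge1}\frac{x^{-2n}}{2n}=-\frac12\log(1-x^{-2})$ can be summed in closed form, and pair each nontrivial zero $\rho$ with its conjugate so that the pair contributes $2\,\Re\,\frac{x^{\rho}}{\rho}$.)

```python
# Numerical check of von Mangoldt's formula (3): psi*(x) directly from Lambda(n)
# versus  x - sum_rho x^rho/rho - log(2 pi) - (1/2) log(1 - x^-2)
# (nontrivial zeros only; the trivial zeros are the last closed-form term).
from mpmath import mp, mpf, zetazero, log, pi, re
from sympy import primerange

mp.dps = 25
x = mpf(50.5)        # a non-integer, so psi*(x) = psi(x)
N = 100              # number of nontrivial zero pairs to use

# direct computation: psi(x) = sum over prime powers p^k <= x of log p
psi_direct = sum(int(log(x) / log(mpf(p))) * log(mpf(p))
                 for p in primerange(2, int(x) + 1))

# explicit formula, pairing rho with its conjugate -> 2*Re(x^rho/rho)
zeros = [zetazero(n) for n in range(1, N + 1)]
psi_explicit = x - sum(2 * re(x**rho / rho) for rho in zeros) \
                 - log(2 * pi) - log(1 - x**-2) / 2

print(psi_direct, psi_explicit)   # the two values agree closely; more zeros -> better
```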

(more details by M. Watkins and an animation with an increasing number of nontrivial zeros)

The next step is to integrate by parts the derivative of $\,\psi^*(t)$ divided by $\log(t)$ (see here) to get the Riemann prime-counting function $\Pi^*$ (also denoted $\Pi_0$ or $J_0$ or $J$ or $f$ by Riemann...): $$\tag{4}\int_0^x\frac{\psi^{*\,'}(t)\ dt}{\log\,t}=\int_2^x\frac{\psi^{*\,'}(t)\ dt}{\log\,t}=\sum_{n\le x}^{*}\frac {\Lambda(n)}{\log\,n}=\sum_{p^k\le x}^{*}\frac 1k=:\Pi^*(x)$$ (more rigorous derivations are needed here; Edwards' exposition of von Mangoldt's proof and Landau's 1908 paper "Nouvelle démonstration pour la formule de Riemann..." may help)

Let's define the logarithmic integral by $\ \displaystyle\operatorname{li}(x):=P.V.\int_0^x \frac{dt}{\log\,t}\,$; then we may combine $(4)$ and the integrals of $\ \displaystyle\operatorname{li}(t^{\rho})'=\frac{t^{\rho-1}}{\log\,t}\;$ from $0$ to $x\,$ to get Riemann's explicit formula for $x > 2$:

$$\tag{5}\boxed{\displaystyle\Pi^*(x)=\operatorname{li}(x)-\sum_{\rho} \operatorname{li}(x^{\rho})}\quad(x>2)$$ (for $x<2$ we must have $\;\Pi^*(x)=0\,$ from its definition $(4)\;$)

To deduce formally (I don't know a convergence proof) the last formula, let's invert $\;\displaystyle\Pi^*(x)=\sum_{p^k\le x}^{*}\frac 1k=\sum_{k>0} \frac{\pi^{*}\bigl(x^{1/k}\bigr)}k\;$ using the Möbius inversion formula $\ \displaystyle\pi^{*}(x)=\sum_{k=1}^{\infty} \frac{\mu(k)}k \Pi^*\bigl(x^{1/k}\bigr)\;$ to get: $$\tag{6}\boxed{\displaystyle\pi^*(x)=R(x)-\sum_{\rho} R(x^{\rho})},\quad(x>1)$$
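
(The Möbius-inversion step itself can be checked exactly in a few lines. A Python sketch with sympy, using exact rational arithmetic; the helper names `Pi_star` / `pi_star` are mine:)

```python
# Exact check of the inversion: pi*(x) = sum_k mu(k)/k * Pi*(x^{1/k}),
# with Pi* computed straight from the prime powers.
from math import log, floor
from fractions import Fraction
from sympy import primerange, mobius, primepi

def Pi_star(x):
    """Riemann's prime-power counting function sum_{p^k <= x} 1/k."""
    if x < 2:
        return Fraction(0)
    return sum(Fraction(1, k)
               for p in primerange(2, floor(x) + 1)
               for k in range(1, floor(log(x) / log(p)) + 1))

x = 1000.5                         # away from the jump points
K = floor(log(x) / log(2))         # Pi*(x^{1/k}) vanishes once x^{1/k} < 2
pi_star = sum(Fraction(int(mobius(k)), k) * Pi_star(x ** (1.0 / k))
              for k in range(1, K + 1))

print(pi_star)                     # 168, exactly
print(primepi(1000))               # 168
```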

We started in $(1)$ with a function $\,\zeta\,$ 'encoding' the primes and ended in $(6)$ with the primes 'counted' using $\zeta$'s zeros!

(Matthew Watkins' "encoding" of the distribution of prime numbers by the nontrivial zeros, with an animation) $$-$$ Your steps were thus formally right and the confusion came only from the sign typo in the $J(x)$ formula. In your final expression corresponding to $(6)$ you also correctly considered all the zeros and not only the nontrivial ones.

Should you wish to evaluate $\operatorname{R}(x^{\rho})$ using the Gram series $\;\displaystyle\operatorname{R}(x^{\rho})=1+\sum_{m=1}^\infty \frac{(\rho\log(x))^m}{m!\,m\, \zeta(m+1)}\;$ then take care with precision (the terms become rather large before decreasing again).
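
(To make this warning concrete, here is a small mpmath sketch: with $x=100$ and the first zero $\rho$, the terms of the Gram series reach roughly $10^{25}$ before the factorial takes over, so the $O(1)$ result emerges from massive cancellation and machine precision is hopeless; both the working precision and the zero itself must be (re)computed with enough digits.)

```python
# The Gram series for R(x^rho): the terms (rho log x)^m/(m! m zeta(m+1)) grow
# huge before decaying, so raise mp.dps well beyond the size of the largest term.
from mpmath import mp, mpf, zetazero, zeta, factorial, log

def gram(z, terms=300):
    """Gram series 1 + sum_m z^m/(m * m! * zeta(m+1)), i.e. R evaluated at exp(z)."""
    return 1 + sum(z**m / (m * factorial(m) * zeta(m + 1)) for m in range(1, terms))

for digits in (15, 30, 60):
    mp.dps = digits
    rho = zetazero(1)            # 1/2 + 14.1347...i, recomputed at each precision
    z = rho * log(mpf(100))      # |z| ~ 65, so the largest terms are ~ 10^25
    print(digits, gram(z))       # 15 digits: garbage; the higher precisions agree
```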

You may use the fact that $\Pi^*(x)=0$ for $x<2$ to reduce the evaluation of $\ \displaystyle\pi^{*}(x)=\sum_{k=1}^{\infty} \frac{\mu(k)}k \Pi^*\bigl(x^{1/k}\bigr)\;$, and likewise the individual $\,\operatorname{R}(x^{\rho})$ terms in $(6)$, to $\left\lceil\dfrac{\log x}{\log 2}\right\rceil$ terms (I'll have to reverify this part).

Concerning the offset logarithmic integral $\operatorname{Li}$ versus the "standard" definition $\operatorname{li}$: I think that the integral used by Riemann should start at $0$ for a proper transition from $(4)$ to $(5)$.
This matters of course in $(5)$ and in your expression for $J$ (because of the $\,\operatorname{li}(2)\approx 1.045$ difference in each of the infinitely many terms) but it doesn't matter in the final $(6)$ since $\;\displaystyle\sum_{n=1}^{\infty} \frac{\mu(n)}n=0$, which was proved by Landau (this appears formally as the limit of $\,\displaystyle\frac 1{\zeta(s)}=\sum_{n=1}^{\infty} \frac{\mu(n)}{n^s}\;$ as $s\to 1$, but the convergence proof is, according to Hardy, as deep as the PNT).
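
(The slow convergence of $\sum\mu(n)/n$ is easy to see numerically. A short sketch with a simple Möbius sieve; the partial sums do creep toward $0$, but painfully slowly:)

```python
# Partial sums of mu(n)/n: they tend to 0, but very slowly (the full statement
# is equivalent in depth to the PNT).
def mobius_upto(N):
    """Sieve returning the list mu[0..N]."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(p, N + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]          # one more prime factor
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0               # p^2 divides m
    return mu

N = 10**5
mu = mobius_upto(N)
s, checkpoints = 0.0, {10, 100, 1000, 10**4, 10**5}
for n in range(1, N + 1):
    s += mu[n] / n
    if n in checkpoints:
        print(n, s)
```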

Now that we have the correct lower bound $0$ for $\,\operatorname{li}\,$ and the correct sign in your expression for $J(x)$, can we use it for numerical evaluation? In fact no, because for $x>1$ the phase of $x^{\rho}$ is important (we are integrating the reciprocal of the logarithm of $t$ up to $x^{\rho}$, but once evaluated $x^{\rho}$ can't be distinguished from $x^{\rho+2k\pi i/\ln x}$!).
Fortunately we can replace $\operatorname{li}$ by the exponential integral $\operatorname{Ei}$ using $\operatorname{li}(x)=\operatorname{Ei}(\log\,x)$ and will obtain the correct results by replacing $\operatorname{li}(x^{\rho})$ with $\operatorname{Ei}(\rho\,\log\,x)$.
Further, there is a neat continued fraction allowing easy evaluation of $\,\operatorname{Ei}(\sigma+it)\,$ for $\sigma$ small and $t$ large (use the continued fraction of $\operatorname{E}_1(-s)$ and $\,\operatorname{Ei}(s)=-\operatorname{E}_1(-s)+\pi i$ for $\Im(s)>0$).
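
(Here is the phase problem in action, as an mpmath sketch: once $x^{\rho}$ has been evaluated as a complex number, its logarithm falls back to the principal branch, so $\operatorname{li}(x^{\rho})=\operatorname{Ei}(\log(x^{\rho}))$ no longer sees the intended argument $\rho\log x$:)

```python
# The phase issue: Ei(rho*log x) is what formula (5) needs, while Ei(log(x^rho))
# silently uses the principal branch of the logarithm and gives something else.
from mpmath import mp, mpf, ei, log, zetazero

mp.dps = 25
x = mpf(100)
rho = zetazero(1)                 # 1/2 + 14.1347...i

w_intended  = rho * log(x)        # imaginary part ~ 65, far beyond pi
w_collapsed = log(x**rho)         # principal branch: imaginary part in (-pi, pi]

print(w_intended, w_collapsed)    # same real part, different imaginary part
print(ei(w_intended))             # the value needed in (5)
print(ei(w_collapsed))            # a different (wrong) value
```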

If you prefer to invoke the $\,\operatorname{Ei}$ function of some software directly, you should be warned that some packages call Ei what is in reality the $\operatorname{E}_1$ function (see A&S $5.1.7$ for a conversion, A&S $5.1.22$ for the continued fraction of $\operatorname{E}_1$ (with $n=1$), approximations and graphics).

Result for $(6)$ using the Gram series and the first $100$ nontrivial zeros, for $x\in(2,100)$: [plot comparing $\pi^*(x)$ with the prime-counting staircase omitted]
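
(For reference, a self-contained Python/mpmath sketch of such a computation, not the exact script used above and with helper names of my own: $\Pi^*$ is evaluated from $(5)$ with the first $100$ nontrivial zero pairs, writing $\operatorname{li}(x^{\rho})$ as $\operatorname{Ei}(\rho\log x)$ and adding the trivial-zero part $-\log 2+\int_x^\infty\frac{dt}{t(t^2-1)\log t}$, i.e. the $m(x)$ of the question; then $\pi^*$ is obtained by the truncated Möbius inversion.)

```python
# pi*(x) from the first 100 nontrivial zero pairs:
#   Pi*(x) ~ li(x) - sum_rho 2*Re Ei(rho log x) - log 2 + integral term,
#   pi*(x) = sum_{k <= log2 x} mu(k)/k * Pi*(x^{1/k}).
from mpmath import mp, mpf, ei, li, log, quad, inf, re, zetazero
from sympy import mobius, primepi

mp.dps = 25
N = 100
zeros = [zetazero(n) for n in range(1, N + 1)]   # takes a few seconds to compute

def Pi_star(x):
    """Explicit formula (5) truncated to N zero pairs (plus the trivial-zero term)."""
    x = mpf(x)
    s = li(x) - sum(2 * re(ei(rho * log(x))) for rho in zeros)
    s += quad(lambda t: 1 / (t * (t**2 - 1) * log(t)), [x, inf]) - log(2)
    return s

def pi_star(x):
    """Truncated Moebius inversion: terms with x^{1/k} < 2 vanish."""
    K = int(log(mpf(x)) / log(mpf(2)))
    return sum(int(mobius(k)) * Pi_star(mpf(x) ** (mpf(1) / k)) / k
               for k in range(1, K + 1))

for x in (10.5, 50.5, 100.5):
    print(x, pi_star(x), primepi(int(x)))   # tracks the exact count closely
```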