Does the correctness of Riemann's Hypothesis imply a better bound on $\sum \limits_{p<x}p^{-s}$?

The key to the proof in my other answer was the quantitative prime number theorem $$\pi(x)=\text{li}(x)+O\left(xe^{-c\sqrt{\log x}}\right),\ \ \ \ \ \ \ \ \ \ (0)$$ along with partial summation. Because we can use partial summation, all that really matters is the case $s=0$, that is, the behavior of $\pi(x)$ itself; this case controls everything else. The Riemann Hypothesis implies that $$\pi(x)=\text{li}(x)+O\left(x^{\frac{1}{2}}\log x\right),\ \ \ \ \ \ \ \ \ \ \ \ (1)$$ and we will look at why this is true later on. For now, let's look at the consequence, and what happens to the sum $\sum_{p\leq x}p^{-s}$.

Going back to the other proof, the error term was just $$t^{-s}\left(\pi(t)-\text{li}(t)\right)\biggr|_{2}^{x}+s\int_{2}^{x}t^{-s-1}\left(\pi(t)-\text{li}(t)\right)dt,$$ which after substituting $(1)$ becomes $$O\left(x^{-\text{Re}(s)+\frac{1}{2}}\log x+|s|\int_{2}^{x}t^{-\text{Re}(s)-\frac{1}{2}}\log t\,dt\right).$$ The integral is then $$\ll\frac{|s|}{|\text{Re}(s)-\frac{1}{2}|}x^{-\text{Re}(s)+\frac{1}{2}}\log x,$$ so that for $\text{Re}(s)\neq\frac{1}{2}$, $\text{Re}(s)<1$, $$\sum_{p\leq x}p^{-s}=\text{li}\left(x^{1-s}\right)+O\left(\frac{|s|}{|\text{Re}(s)-\frac{1}{2}|}x^{-\text{Re}(s)+\frac{1}{2}}\log x\right).$$ (One caveat: for $\frac{1}{2}<\text{Re}(s)<1$ the boundary terms at $t=2$ also contribute a constant depending on $s$, so in that range the error should really be read as $O(1)$ in $x$ rather than decaying.) The cases $\text{Re}(s)=\frac{1}{2}$ and $\text{Re}(s)=1$ are special and must be dealt with separately. For example, $$\sum_{p\leq x}p^{-\frac{1}{2}+i\gamma}=\text{li}\left(x^{\frac{1}{2}-i\gamma}\right)+O\left(|\gamma|\log^{2}x\right).$$ (We do not consider $\text{Re}(s)>1$, since the series converges absolutely there.)

Notice that for any fixed $\epsilon>0$ we can actually remove the denominator involving $|\text{Re}(s)-\frac{1}{2}|$. This is done by splitting into the cases $|\text{Re}(s)-\frac{1}{2}|\geq\epsilon$ and $|\text{Re}(s)-\frac{1}{2}|<\epsilon$, and taking minimums, so that the implied constant depends only on $\epsilon$. In particular, $$\sum_{p\leq x}p^{-s}=\text{li}\left(x^{1-s}\right)+O_\epsilon\left(|s|x^{-\text{Re}(s)+\frac{1}{2}+\epsilon}\right).$$
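As a quick illustration (not part of the proof), here is a minimal numerical sanity check of the main term, assuming sympy's `primerange` for the primes and mpmath's `li` for the logarithmic integral; it compares $\sum_{p\leq x}p^{-s}$ against $\text{li}(x^{1-s})$ for a few real values of $s$:

```python
# Sketch: numerically compare sum_{p <= x} p^{-s} with li(x^{1-s})
# for real s < 1.  Uses sympy for the primes and mpmath for li.
from sympy import primerange
from mpmath import mp, mpf, li

mp.dps = 30              # working precision
x = 10**5
for s in [mpf(0), mpf("0.25"), mpf("0.75")]:
    prime_sum = sum(mpf(p)**(-s) for p in primerange(2, x + 1))
    main_term = li(mpf(x)**(1 - s))
    # For s > 1/2 the difference settles near a constant depending on s
    # (the boundary terms at t = 2), as noted in the caveat above.
    print(f"s = {s}:  sum = {prime_sum},  li(x^(1-s)) = {main_term},  "
          f"difference = {prime_sum - main_term}")
```

For $s=0$ this is just $\pi(x)$ versus $\text{li}(x)$, which is the case everything else reduces to by partial summation.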

Remark: I realized that in my last post I might have been a bit careless about complex $s$: some real parts need to be inserted for the bounds to make sense, and $|s|$ in some places as well, all of which can be ignored for real $s$.

Why do we have equation $(1)$? This is quite an important question, and I won't give a complete answer here. For a complete proof see Titchmarsh's The Theory of the Riemann Zeta-Function, or Montgomery and Vaughan's Multiplicative Number Theory.

Using some complex analysis (we need some lemmas bounding $\frac{\zeta'}{\zeta}$ on suitable contours so that everything works out nicely) we can prove that, for $x$ not a prime power, $$ \sum_{p^k\leq x} \log p=x-\sum_{\rho:\zeta(\rho)=0}\frac{x^\rho}{\rho}-\frac{\zeta'(0)}{\zeta(0)}, $$ where the sum runs over all zeros of zeta, trivial and nontrivial.
The left-hand side is a step function which jumps at the prime powers (often written as $\psi(x)=\sum_{n\leq x}\Lambda(n)$), whereas the right-hand side is a continuous main term plus a sum over the zeros of the zeta function. The zeros magically conspire at the prime powers to make this conditionally convergent series suddenly jump. Removing the trivial zeros (and truncating suitably) introduces an error of only $O(\log x)$, so the sum really depends on the nontrivial zeros of zeta. Specifically, if we can bound the real parts of the zeros, then we can bound this error term (being careful about convergence and taking certain limits properly). The best possible bound is $\text{Re}(\rho)=\frac{1}{2}$ for every zero $\rho$, which is the Riemann Hypothesis; this is why the best error term is just slightly larger than $\sqrt{x}$ (about $\log^{2}x$ larger). Using partial summation then takes us to a bound for $\pi(x)$; in particular we get $(1)$.
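To see the explicit formula in action, here is a small numerical sketch (an illustration only): it assumes mpmath's `zetazero` for the nontrivial zeros, truncates the zero sum at the first $N$ zeros, drops the trivial-zero contribution (which is $O(x^{-2})$), and uses the known value $\frac{\zeta'(0)}{\zeta(0)}=\log 2\pi$:

```python
# Sketch: compare psi(x) = sum_{p^k <= x} log p with the truncated
# explicit formula  x - sum over first N zero pairs of x^rho/rho - log(2*pi).
from sympy import primerange
from mpmath import mp, mpf, log, pi, zetazero

mp.dps = 20
x = mpf(1000)

# psi(x) computed directly from the prime powers up to x
psi = mpf(0)
for p in primerange(2, int(x) + 1):
    pk = p
    while pk <= x:
        psi += log(p)
        pk *= p

# Truncated zero sum: pairing each zero rho = 1/2 + i*gamma with its
# conjugate gives the real contribution 2*Re(x^rho / rho).
N = 100
zero_sum = mpf(0)
for n in range(1, N + 1):
    rho = zetazero(n)          # n-th nontrivial zero, Im(rho) > 0
    zero_sum += 2 * (x**rho / rho).real
approx = x - zero_sum - log(2 * pi)

print(f"psi(x)           = {psi}")
print(f"explicit formula = {approx}")
```

Increasing $N$ makes the truncated sum trace the jumps of $\psi(x)$ at the prime powers more and more sharply, which is the "conspiracy" of the zeros described above.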

I hope this gives an idea of why it is true; I suggest looking in some of those books. Another good question to ask is: why does equation $(0)$ hold? This requires even more time to prove, as we need to construct a zero-free region for $\zeta(s)$. (Again, this is in Montgomery and Vaughan's book.)

Hope that helps,


The exact answer is Theorem IV, due to von Mangoldt and reproduced in Landau's book "Handbuch der Lehre von der Verteilung der Primzahlen", visible at Google Books.