Probability that two random numbers are coprime is $\frac{6}{\pi^2}$

Let's look at the function $$S(x)=\sum_{\substack{m,n\leq x\\ \gcd(m,n)=1}}1.$$

Then, since $\sum_{d\mid n}\mu(d)$ equals $1$ when $n=1$ and $0$ otherwise, and writing $m=dr$, $n=ds$, notice that $$S(x)=\sum_{m,n\leq x}\sum_{d|\gcd(m,n)}\mu(d)=\sum_{d\leq x}\mu(d)\sum_{r,s\leq\frac{x}{d}}1= \sum_{d\leq x}\mu(d)\left[\frac{x}{d}\right]^{2}$$

From here it is straightforward to see that in the limit, $$\frac{S(x)}{x^2}\rightarrow\sum_{n=1}^\infty \frac{\mu(n)}{n^2}=\frac{1}{\zeta(2)}=\frac{6}{\pi^2}.$$
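As a quick sanity check on this limit, here is a short Python sketch (my own illustration, not part of the original argument; the sieve and the names `mobius_sieve` and `S` are choices of mine) that evaluates the Möbius sum above and compares $S(x)/x^2$ with $6/\pi^2$:

```python
# Sketch: compute S(x) = sum_{d<=x} mu(d) * floor(x/d)^2 and compare
# S(x)/x^2 against 6/pi^2. Names and the sieve are illustrative choices.
from math import pi

def mobius_sieve(n):
    """Return mu[0..n], the Mobius function values, via a simple sieve."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for k in range(p, n + 1, p):
                if k > p:
                    is_prime[k] = False
                mu[k] *= -1          # one factor of -1 per prime divisor
            for k in range(p * p, n + 1, p * p):
                mu[k] = 0            # multiples of p^2 are not squarefree
    return mu

def S(x):
    """Count coprime pairs (m, n) with m, n <= x via the Mobius sum."""
    mu = mobius_sieve(x)
    return sum(mu[d] * (x // d) ** 2 for d in range(1, x + 1))

for x in (10, 100, 1000, 10000):
    print(x, S(x) / x**2)
print("6/pi^2 =", 6 / pi**2)  # ~ 0.6079271
```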

However, there are still some interesting questions here. How fast does it converge, and what are the secondary terms? It turns out we can easily relate this to the summatory totient function, which has a rich history. See these two Math Stack Exchange posts: Totient function, Asymptotic formula. What follows below is a modification of my answer on the second post.

The History Of The Error Term

In 1874, Mertens proved that $$S(x)=\frac{6}{\pi^{2}}x^{2}+O\left(x\log x\right).$$ Throughout we use $E(x)=S(x)-\frac{6}{\pi^2}x^2$ for the error function.
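Purely as an illustration (my own numerics, not part of the history), one can print $E(x)/x$ for a few values, reusing `mobius_sieve` and `S` from the sketch above; in this small range the ratio stays modest, in line with the average-order results quoted below.

```python
# Illustrative only: E(x)/x for a few x, reusing S() from the sketch above.
from math import pi

def E(x):
    # The error term defined in the text: E(x) = S(x) - (6/pi^2) x^2.
    return S(x) - 6 / pi**2 * x**2

for x in (100, 1000, 10000, 100000):
    print(x, round(E(x) / x, 4))
```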

The best unconditional result, due to Walfisz (1963), is $$E(x)\ll x\left(\log x\right)^{\frac{2}{3}}\left(\log\log x\right)^{\frac{4}{3}}.$$

In 1930, Chowla and Pillai showed that this cannot be improved by much: $E(x)$ is not $$o\left(x\log\log\log x\right).$$

In particular, they showed that $\sum_{n\leq x}E(n)\sim\frac{3}{\pi^{2}}x^{2}$, so that $E(n)\asymp n$ on average. In 1950, Erdős and Shapiro proved that there exists $c>0$ such that for infinitely many positive integers $N,M$ we have $$E(N)>cN\log\log\log\log N\ \ \text{and}\ \ E(M)<-cM\log\log\log\log M, $$

or more concisely

$$E(x)=\Omega_{\pm}\left(x\log\log\log\log x\right).$$

In 1987 Montgomery improved this to

$$E(x)=\Omega_{\pm}\left(x\sqrt{\log\log x}\right).$$

Hope you enjoyed that,

Added: At some point, I wrote a long blog post about this, complete with a proof of Montgomery's lower bound.


Here is a fairly easy approach

Let us start with some basic observations:

$\bullet$ Every integer is divisible by $1$, so the probability of being divisible by $1$ is $1$.

$\bullet$ A given integer is either even or odd, hence has probability $1/2$ of being divisible by $2$.

$\bullet$ Similarly, an integer has probability $1/3$ of being divisible by $3$, since every integer has the form $3k$, $3k+1$, or $3k+2$.

Heuristic: More generally, among any $p$ consecutive integers, exactly one is divisible by $p$.

  1. From this we infer that the probability that a randomly chosen integer is divisible by $p$ is $\frac{1}{p}$: one chance out of $p$.
  2. Assuming the two choices are independent, the probability that two integers are both divisible by a prime $p$ is $\frac{1}{p^2}$.
  3. Hence the probability that two integers are not both divisible by a given prime $p$ is $$1-\frac{1}{p^2}.$$
  4. Conclusion: treating the conditions at different primes as independent, the probability that two integers are never simultaneously divisible by a prime (meaning that they are coprime) is therefore given by (see the simulation below for a numerical check)
  $$ \color{blue}{\prod_{p\ \text{prime}}\left(1-\frac{1}{p^2} \right) = \left(\prod _{p\ \text{prime}}\frac {1}{1-p^{-2}}\right)^{-1}=\frac {1}{\zeta (2)}=\frac {6}{\pi ^{2}} \approx 0.607927102 \approx 61\%}$$

Here we should recall, from the Basel problem, the following Euler identity:

$$\frac{\pi^2}{6}=\sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2)=\prod _{p\ \text{prime}}\frac {1}{1-p^{-2}} $$
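If it helps, here is a hedged Monte Carlo sketch of the blue formula in Python; the sampling range `N` and the number of trials are arbitrary choices of mine, not from the answer:

```python
# Monte Carlo sketch: the empirical fraction of coprime pairs should be
# close to 6/pi^2. N and trials are arbitrary illustrative parameters.
import random
from math import gcd, pi

random.seed(0)
N, trials = 10**9, 10**6
hits = sum(
    gcd(random.randrange(1, N), random.randrange(1, N)) == 1
    for _ in range(trials)
)
print(hits / trials)   # empirical estimate, ~ 0.608
print(6 / pi**2)       # ~ 0.6079271
```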

By the same token, the probability that $m$ randomly chosen integers are coprime (that is, their overall $\gcd$ is $1$) is given by

$$ \color{red}{\prod_{p\ \text{prime}}\left(1-\frac{1}{p^m} \right) = \left(\prod _{p\ \text{prime}}\frac {1}{1-p^{-m}}\right)^{-1}=\frac {1}{\zeta (m)}}$$

Here $\zeta$ is the Riemann zeta function, $$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}\qquad (s>1). $$
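For instance, for $m=3$ the probability should be $1/\zeta(3)\approx 0.832$. Here is a small hedged check of that case (my sketch; the truncation point of the zeta series and the sampling parameters are arbitrary):

```python
# Sketch for m = 3: fraction of triples with gcd 1 vs 1/zeta(3).
import random
from functools import reduce
from math import gcd

random.seed(1)
N, trials, m = 10**9, 10**6, 3
hits = sum(
    reduce(gcd, (random.randrange(1, N) for _ in range(m))) == 1
    for _ in range(trials)
)
zeta_m = sum(1.0 / n**m for n in range(1, 10**6))  # truncated zeta(3) series
print(hits / trials)   # empirical, ~ 0.832
print(1 / zeta_m)      # 1/zeta(3) ~ 0.8319
```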