$$
\begin{align}
1100 & = 2\times2\times5\times5\times11 \\
1101 & = 3\times367 \\
1102 & = 2\times19\times29 \\
1103 & = 1103 \\
1104 & = 2\times2\times2\times2\times3\times23 \\
1105 & = 5\times13\times17 \\
1106 & = 2\times7\times79
\end{align}
$$

In looking at this list of prime factorizations, I see that all of the first 10 prime numbers, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, appear within the factorizations of only seven consecutive integers. (The next prime number, 31, has its nearest multiples, 1085 and 1116, as far from 1100 as they could be.) So no nearby number could hope to be divisible by 29 or 23, nor even by 7 for suitably adjusted values of "nearby". Consequently, when you're factoring nearby numbers, you're deprived of those small primes as potential factors. So nearby numbers, for lack of small primes that could divide them, must be divisible by large primes. And accordingly, not far away, we find $1099 = 7\times157$ (157 shows up only once every 157 integers, so you wouldn't usually expect to find it so close by); likewise 1098 is divisible by 61, 1108 by 277, 1096 by 137, 1095 by 73, 1094 by 547, etc.; and 1097 and 1109 are themselves prime.
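For concreteness, here is a quick numerical check of this pattern; a minimal sketch using `sympy` (my choice of library, any factorization routine would do):

```python
from sympy import factorint, isprime

# The small primes are "used up" by 1100..1106, so their neighbours
# are forced to lean on large prime factors (or be prime themselves).
for n in range(1094, 1110):
    factors = factorint(n)  # {prime: exponent}
    factorization = " * ".join(
        f"{p}^{e}" if e > 1 else str(p) for p, e in sorted(factors.items())
    )
    note = "prime" if isprime(n) else f"largest prime factor: {max(factors)}"
    print(f"{n} = {factorization}   ({note})")
```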

So if an unusually large number of small primes occur unusually close together as factors, then an unusually large number of large primes must also be in the neighborhood.

Are there known precise results quantifying this phenomenon?


Solution 1:

The phenomenon you describe seems to be, in essence, the idea behind the Sieve of Eratosthenes.

Every composite $N$ has a prime factor at most $\sqrt{N}$, so every prime up to $\sqrt{N}$ appears in the list of possible factors. Looking at the list generated by the Sieve, it becomes obvious that only primes remain, and factoring $N$ with the Sieve then amounts to trial division by the primes up to $\sqrt{N}$. From this point of view it is clear that if "many" small primes were used to remove elements in the Sieve, the remaining elements must have larger prime factors, possibly being primes themselves.
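A minimal sketch of that sieve-then-trial-division idea (the function names are my own, for illustration):

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # p is prime; strike out its multiples starting at p*p
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]

def factorize(n):
    """Factor n by trial division over the primes up to sqrt(n)."""
    factors = []
    for p in sieve(int(n ** 0.5)):
        while n % p == 0:
            factors.append(p)
            n //= p
    if n > 1:  # whatever survives trial division is itself prime
        factors.append(n)
    return factors

print(factorize(1100))  # [2, 2, 5, 5, 11]
print(factorize(1103))  # [1103]
```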

However: no generalization of any kind is possible from position alone, since every prime's occurrence as a factor is cyclic. Think of a Fourier transform, using primes as frequencies: 2 divides every second number, 3 every third, and so on. At any point $N$, there is no way to determine whether there are primes nearby, or whether the numbers nearby have "large primes" as factors, without knowing the value of $N$.
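One way to picture that periodicity is to print the small-prime "comb" around 1100; a quick sketch:

```python
# Each prime p divides every p-th integer, so divisibility by the small
# primes forms a periodic comb; which teeth line up at a given N depends
# entirely on N's value.
small_primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
for n in range(1094, 1110):
    hits = [p for p in small_primes if n % p == 0]
    print(n, "divisible by:", hits or "no prime <= 29")
```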

I could also relate what you're saying to the concept of Mersenne primes: primes of the form $2^{n}-1$, the largest known (as of this writing) being $2^{43{,}112{,}609} - 1$, which also happens to be the largest known prime. In this case, people are looking for primes in the vicinity of powers of 2, which is also a way of saying that the largest factor is a large prime, right?
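As an aside, primality of a Mersenne number $2^p - 1$ is usually decided with the Lucas-Lehmer test; a toy version (record-size exponents are run with FFT-based multiplication, not like this):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: is the Mersenne number 2**p - 1 prime?
    Valid for odd prime exponents p."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2**11 - 1 = 2047 = 23 * 89: a prime exponent is necessary but not
# sufficient for 2**p - 1 to be prime.
for p in [3, 5, 7, 11, 13, 17, 19]:
    print(p, 2 ** p - 1, lucas_lehmer(p))
```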

So yes, it stands to reason that if $N$ isn't a prime and has small factors, numbers nearby have a better chance of having large factors. No quantification of that is useful, however.

Solution 2:

If you take $7$ consecutive numbers, the chance that one of them is divisible by a prime $p > 7$ is $\dfrac{7}{p}$ (exactly so, if the starting point is uniformly random modulo $p$). If $x$ of those $7$ numbers are prime, then the chance that one of the remaining $7-x$ numbers is divisible by a prime $p$ is still $\dfrac{7}{p}$, since the primes themselves absorb none of those factors. This is why the numbers around primes have relatively more prime factors than numbers which are farther away from primes.
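That $\dfrac{7}{p}$ figure can be checked directly: for a prime $p > 7$, a window of $7$ consecutive integers contains a multiple of $p$ exactly when its starting residue modulo $p$ is one of $7$ values. A quick verification:

```python
# For each prime p > 7, count the starting residues n (mod p) for which
# the window n, n+1, ..., n+6 contains a multiple of p: always exactly 7.
for p in [11, 13, 17, 19, 23, 29]:
    hits = sum(any((n + k) % p == 0 for k in range(7)) for n in range(p))
    print(f"p={p}: {hits} of {p} windows contain a multiple, i.e. 7/{p}")
```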

Solution 3:

It seems to me you could examine your conjecture experimentally by taking suitable definitions of "small window" and "large window", say $S(\cdot)$ and $L(\cdot)$. You also need some notion of "density of small prime factors", say $D$.

Your conjecture then seems to be that the frequency of large prime factors in $L(\cdot)$ is monotonically related to $D(S(\cdot))$; i.e., as one increases, so does the other.
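A rough version of that experiment, with arbitrary illustrative choices for the windows and the "small prime" cutoff (all names and thresholds here are mine, not standard):

```python
from sympy import factorint

SMALL = 30               # "small prime" cutoff -- arbitrary illustrative choice
S_HALF, L_HALF = 3, 25   # small window of 7 integers, large window of 51

def small_prime_density(n):
    """D(S(n)): small-prime factors, counted with multiplicity,
    over the small window around n."""
    return sum(e for m in range(n - S_HALF, n + S_HALF + 1)
                 for p, e in factorint(m).items() if p < SMALL)

def large_prime_frequency(n):
    """How many integers in the large window around n have a prime
    factor >= SMALL."""
    return sum(1 for m in range(n - L_HALF, n + L_HALF + 1)
                 if any(p >= SMALL for p in factorint(m)))

xs, ys = [], []
for n in range(1000, 5000, 17):  # sample some window centres
    xs.append(small_prime_density(n))
    ys.append(large_prime_frequency(n))

# Pearson correlation, computed by hand to avoid extra dependencies;
# a clearly positive value is (weak) evidence for the monotone relation.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
norm = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
print("correlation:", cov / norm)
```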