Why does the harmonic series diverge but the p-harmonic series converge?
I am struggling to understand intuitively why the harmonic series diverges but the p-harmonic series (here with $p=2$) converges. I know there are standard methods to prove convergence or divergence; it is only the intuition that I am missing. I know I must never blindly trust my intuition, but this is hard for me to grasp. In both cases the terms of the series are getting smaller, hence approaching zero, yet the two series behave differently: $$\sum_{n=1}^{\infty}\frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots \quad \text{diverges,}$$ $$\sum_{n=1}^{\infty}\frac{1}{n^2}=1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots \quad \text{converges.}$$
Solution 1:
Firstly, you should always use your intuition. If you find that your intuition was correct, then smile. If you find that your intuition was wrong, use the experience to fine-tune your intuition.
I hope I'm interpreting your question correctly; here goes. Since you are not interested in any of the proofs, I'll just focus on intuition. Now, let's consider a series of the form $\sum _n \frac{1}{n^p}$, with $p>0$ a parameter. Intuitively, the convergence or divergence of the series depends on how fast the general term $\frac{1}{n^p}$ tends to $0$. This is so because the sum is that of infinitely many positive quantities. If these quantities converge to $0$ too slowly, the sheer number of summands in each partial sum dominates their small magnitude, and the partial sums grow without bound. If, however, the quantities converge to $0$ fast enough, the smallness of the summands outweighs the fact that there are lots of them.
So, the question is how fast $\frac{1}{n^p}$ converges to $0$. Let's look at some extreme values of $p$. If $p$ is very large, say $p=1000$, then $\frac{1}{n^p}$ becomes very small very fast (experiment with computing just a few values, as in the sketch below). So, when $p$ is large, the general term converges to $0$ very fast, and we'd expect the series to converge. However, if the value of $p$ is very small, say $p=\frac{1}{1000}$, then $\frac{1}{n^p}$ is actually pretty large for the first few values of $n$, and while it does monotonically tend to $0$, it does so very slowly. So, we'd expect the series to diverge when $p$ is small.
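If you want to see this numerically, here is a quick Python sketch (my own illustration, not part of the original answer; note that values below roughly $10^{-308}$ underflow to $0$ in double precision):

```python
# How fast does 1/n^p shrink for extreme values of p?
# Values below ~1e-308 underflow to 0.0 in double precision.
for n in (1, 2, 3, 10, 100):
    print(f"n={n:>3}:  1/n^1000 = {n ** -1000.0:.3e}    1/n^(1/1000) = {n ** -0.001:.6f}")

# Partial sums: for large p the sum settles almost immediately;
# for tiny p (and for p = 1) it just keeps climbing.
for p in (1000, 2, 1, 0.001):
    s = sum(n ** -p for n in range(1, 100_001))
    print(f"p = {p}: sum of the first 100000 terms = {s:.4f}")
```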
Now, if $0<p<q$ then $\frac{1}{n^q}<\frac{1}{n^p}$, so the bigger the parameter, the faster the general term converges to $0$. So, small values of the parameter imply divergence of the series, while large values imply convergence. Somewhere in the middle, then, there has to be a value $b$ for the parameter such that if $p<b$ the series diverges, while if $p>b$ the series converges.
So, just by this straightforward analysis of the behaviour with respect to varying the parameter $p$, we know (intuitively) that there must be some cut-off value for $p$ that is the gateway between convergence and divergence. What happens at that gateway value of $p$ is unclear, and there is no compelling reason to suspect one behaviour of the series over the other. Moreover, the particular whereabouts of that gateway value depend strongly on the particularities of the general term. This is where you'll have to delve into more rigorous proofs.
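As a rough numerical companion to this picture (again my own sketch, with arbitrarily chosen checkpoints), you can watch the partial sums $S_N = \sum_{n=1}^{N} \frac{1}{n^p}$ for values of $p$ on both sides of $1$:

```python
# Partial sums S_N(p) at two checkpoints, N = 10^3 and N = 10^6.
# For p < 1 the sums keep growing quickly; for p > 1 the growth stalls.
for p in (0.5, 0.9, 1.0, 1.1, 2.0):
    s_small = sum(n ** -p for n in range(1, 1_001))
    s_large = sum(n ** -p for n in range(1, 1_000_001))
    print(f"p = {p}: S(10^3) = {s_small:10.3f}   S(10^6) = {s_large:10.3f}")
```

The output suggests the gateway sits at $p=1$, though of course no finite computation can prove it.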
I hope this rather lengthy answer addresses what you were wondering about. Basically, it says that a cutoff parameter must exist, but we can't expect to say anything about its whereabouts or about the behaviour at that cutoff without careful study of the general term.
Solution 2:
We produce two series that are close in spirit to the series you mentioned. Perhaps the divergence of the first, and the convergence of the second, will be clearer.
Consider the series $$\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{16}+\frac{1}{16}+\frac{1}{16}+\frac{1}{16}+\frac{1}{16}+\cdots.$$ So there is $1$ term equal to $\frac{1}{2}$, then a block of $2$ terms each equal to $\frac{1}{4}$, then a block of $4$ terms each equal to $\frac{1}{8}$, then a block of $8$ terms each equal to $\frac{1}{16}$, and so on forever. Each block has sum $\frac{1}{2}$, so if you add enough terms, your sum will be very big. But it will take an awful lot of terms to add up to $1000$, many more terms than there are atoms in the universe. Note that each term is less than the corresponding term of the harmonic series, so if you add together enough terms of the harmonic series, the sum will also be very big.
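To put a number on "an awful lot" (a back-of-the-envelope figure of my own, not from the original answer): the first $m$ blocks together contain $2^m-1$ terms and contribute $\frac{m}{2}$ to the sum, so to push the sum past $1000$ you need about $2000$ blocks, i.e. roughly $$2^{2000}\approx 10^{602}$$ terms, while the observable universe is usually estimated to contain only about $10^{80}$ atoms.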
Now consider the series $$\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{4^2}+\frac{1}{4^2}+\frac{1}{4^2}+\frac{1}{4^2}+\frac{1}{8^2}+\frac{1}{8^2}+\frac{1}{8^2}+\frac{1}{8^2}+\frac{1}{8^2}+\cdots.$$ Each term is $\ge$ the corresponding term in the series $1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\frac{1}{5^2}+\cdots$.
Again, we find the sums of the blocks. The first block has sum $1$. The second has sum $\frac{1}{2}$. The third has sum $\frac{1}{4}$. The fourth has sum $\frac{1}{8}$, and so on forever. So if we add up "all" the terms, we get sum $1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots$, a familiar series with sum $2$.
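Spelling out the final step: every partial sum of $1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots$ is therefore at most $$1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots=2,$$ and an increasing sequence that is bounded above must converge. (The exact value of the sum, $\frac{\pi^2}{6}\approx 1.645$, is a famous result, but the bound $2$ is all that convergence requires.)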
Solution 3:
Intuitively, the main argument for why the harmonic series diverges is that for every $k$, $$\sum_{n=k+1}^{2k}\frac{1}{n} > k\cdot\frac{1}{2k}=\frac{1}{2},$$ since the smallest of these terms is $\frac{1}{2k}$ and there are $k$ terms with $n$ in the interval $(k,2k]$. So the harmonic sum over any such finite interval exceeds $0.5$.
So if you split the tail of the summation into the intervals $(k,2k], (2k,4k], \dots, (2^nk,2^{n+1}k], \dots$, each interval contributes more than $0.5$, and since infinitely many such intervals are needed to reach all of $\mathbb{N}$, the sum grows past every bound and the series diverges.
Basically, the terms get smaller and smaller, but not fast enough for the partial sums to converge to a limit.
The p-harmonic series, on the other hand, because of the square in the denominator, cannot pull off this trick: the terms get smaller fast enough that no block of them adds up to a fixed positive amount, and the series converges.
To explain it at an intuitive level a bit better, think of it like this: you are adding infinitely many terms, so in order for the sums to converge to a limit $L$, the terms have to get smaller with a certain "speed". Even if they get close to $0$, if the speed at which they shrink is not high enough, too much is still being added at each stage, and the sums will never converge.
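Here is a small Python sketch (my own, not part of the answer) that makes the comparison of "speeds" concrete by summing each series over the blocks $[2^k, 2^{k+1})$:

```python
# Block sums over [2^k, 2^(k+1)) for 1/n and for 1/n^2.
# The harmonic blocks hover near ln 2 ~ 0.693 and never drop below 1/2,
# so their total grows forever; the 1/n^2 blocks roughly halve each time,
# so their total stays finite.
for k in range(10):
    lo, hi = 2 ** k, 2 ** (k + 1)
    harmonic_block = sum(1 / n for n in range(lo, hi))
    p2_block = sum(1 / n ** 2 for n in range(lo, hi))
    print(f"k = {k}: 1/n block = {harmonic_block:.4f}   1/n^2 block = {p2_block:.6f}")
```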
Solution 4:
If you convert the sum to an integral, $$ \int_1^\infty \frac{1}{x^2}\,dx = \left[-\frac{1}{x}\right]_1^\infty = \lim_{b\to\infty}\left(1-\frac{1}{b}\right) = 1 $$ converges, but $$ \int_1^\infty \frac{1}{x}\,dx = \Big[\ln x\Big]_1^\infty = \lim_{b\to\infty}\ln b = \infty $$ does not.
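As a numerical footnote (my own sketch; the integral test says the partial sums and the truncated integrals converge or diverge together), you can watch the sums track the integrals:

```python
import math

# sum_{n<=N} 1/n grows like the integral ln N (both diverge), while
# sum_{n<=N} 1/n^2 stays within a bounded distance of 1 - 1/N (both converge).
for N in (10, 1_000, 100_000):
    s1 = sum(1 / n for n in range(1, N + 1))
    s2 = sum(1 / n ** 2 for n in range(1, N + 1))
    print(f"N = {N:>6}: sum 1/n = {s1:8.4f} vs ln N = {math.log(N):8.4f}"
          f"    sum 1/n^2 = {s2:.6f} vs 1 - 1/N = {1 - 1 / N:.6f}")
```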
Solution 5:
I think the best way to understand this intuitively is to look at the graphs of the representative functions, in particular $\frac{1}{x}$ and $\frac{1}{x^2}$. You will see that the latter decreases much faster than the former, which means that, loosely speaking, it approaches $0$ "faster" than $\frac{1}{x}$.

Now, a series/sum can be thought of as a rough integral, so consider the area under each graph. As you go out to infinity, the area being added under the graph of $\frac{1}{x^2}$ becomes much smaller much faster since, again, it decreases faster than $\frac{1}{x}$. Therefore, less and less area is added to the sum as you reach higher and higher values of $n$ (the series substitute for $x$). With $\frac{1}{x}$, however, the area under the graph does not decrease as quickly; it remains significant enough to keep the sum growing.

Therefore, there must be some value of $p$ in $[1,2]$ where the split between diverging and converging occurs. And, though it may not seem so, $\frac{1}{x^2}$ is an extreme case: even values of $p$ only slightly greater than $1$ give a convergent series. It just so happens that $p=1$ is the last value for which the series diverges, and everything greater than $p=1$ converges.
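To see just how extreme $\frac{1}{x^2}$ is, here is an illustrative computation of my own with $p=1.01$, barely above the cutoff; the series still converges, just very slowly:

```python
# Even p = 1.01 converges: comparing with the integral of x^(-1.01),
# the sum can never exceed 1 + 1/(p - 1) = 101, however many terms we add.
p = 1.01
s = sum(n ** -p for n in range(1, 1_000_001))
print(f"p = {p}: partial sum after 10^6 terms = {s:.3f}  (bounded above by 101)")
```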