Can we show that $1+2+3+\dotsb=-\frac{1}{12}$ using only stability or linearity, not both, and without regularizing or specifying a summation method?

Regarding the proof by Tony Padilla and Ed Copeland that $1+2+3+\dotsb=-\frac{1}{12}$ popularized by a recent Numberphile video, most seem to agree that the proof is incorrect, or at least, is not sufficiently rigorous. Can the proof be repaired to become rigorously justifiable? If the proof is wrong, why does the result it computes agree with other more rigorous methods?

Critiques of the proof seem to fall into two classes.

One class of responses is to appeal to higher math for justification. For example, appeal to zeta function regularization, as is done by Edward Frenkel in a followup video for Numberphile and a more recent video by Mathologer, or to use an exponential regulator, as shown by Luboš Motl here on M.SE, or a smooth cutoff regulator as in Terry Tao's wonderful blog post on the subject. These are great, but don't really address what's wrong with the naive computation.

Another response is that the sum is infinite, the series is divergent, and the manipulations are wrong, because manipulations of divergent series can lead to inconsistent results. See for example the answer by robjohn at this question, where he uses similar manipulations to show that the sum must also be $-\frac{7}{12}$. A similar contradiction is shown at the beginning of Tao's post. And Wikipedia's article on the series has a section showing that any stable and linear summation method which sums the series implies $0=1$. See also Hagen von Eitzen's answer here for some excellent discussion. A reference to the Riemann series theorem may also be appropriate here. These responses are perhaps too dismissive, since there are a variety of rigorous ways to assign finite numbers to divergent sums. People say you have to be careful with divergent series, but few seem to be willing to say what steps the careful observer may take.

Proofs of the type given in that first Numberphile video can be valid if one is careful. For comparison, without specifying a summation method, just by manipulating series in a similar fashion, you can show that the geometric series $1+x+x^2+\dotsb$ sums to $\frac{1}{1-x}$, valid for any value of $x$ for which there exists any stable linear summation method. So, for example, once we know that $1+2+4+8+\dotsb$ converges 2-adically, without any further information we know that its sum must be $-1$, even though classical summation cannot sum this divergent series.
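
As a quick sanity check on the 2-adic claim (my own illustration, not part of the original argument): the partial sums of $1+2+4+8+\dotsb$ are $2^n-1$, which agree with $-1$ modulo $2^k$ once $n\ge k$, so they converge 2-adically to $-1$.

```python
# Partial sums of 1 + 2 + 4 + 8 + ... are 2^n - 1.  In the 2-adic metric,
# two integers are close when their difference is divisible by a high power
# of 2; here (2^n - 1) - (-1) = 2^n is divisible by 2^k for all n >= k,
# so the partial sums converge 2-adically to -1.
partial_sum = 0
for n in range(1, 64):
    partial_sum += 2 ** (n - 1)              # add the next term 2^(n-1)
    k = min(n, 10)                           # 2-adic precision certified so far
    assert (partial_sum - (-1)) % 2 ** k == 0
print(partial_sum == 2 ** 63 - 1)            # closed form: prints True
```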

With that in mind, let's reexamine the computation in the Numberphile video (which I've pasted below in its entirety, copied from Kenny LJ at this question with some edits for understandability and rigor). It uses the Grandi series $1-1+1-1+\dotsb = \frac{1}{2}$ and the series $1-2+3-4+\dotsb=\frac{1}{4}$ to derive the result. These series are Cesàro summable (Grandi's by $(C,1)$, and $1-2+3-4+\dotsb$ by $(C,2)$), and Cesàro summation of every order is stable and linear, which allows the manipulations to be justified. I think this first half of the computation is completely justifiable. All we lack is a proof that the two series are Cesàro summable, which is not hard.
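
To see numerically that this first half is on safe ground, here is a small sketch (my own; `holder_means` is a name I made up) using iterated Hölder averaging, which for these series agrees with the Cesàro means: the averages of the Grandi partial sums approach $\frac12$, and twice-iterated averages for $1-2+3-4+\dotsb$ approach $\frac14$.

```python
from itertools import accumulate

def holder_means(terms, order):
    """Iterated averaging (Hölder means) of the partial sums.  For these
    series the (H,k) value agrees with the Cesàro (C,k) value."""
    seq = list(accumulate(terms))                       # partial sums
    for _ in range(order):
        # replace the sequence by its running averages
        seq = [s / (i + 1) for i, s in enumerate(accumulate(seq))]
    return seq[-1]

N = 200000
grandi = [(-1) ** n for n in range(N)]                  # 1 - 1 + 1 - 1 + ...
alternating = [(-1) ** n * (n + 1) for n in range(N)]   # 1 - 2 + 3 - 4 + ...

print(abs(holder_means(grandi, 1) - 0.5) < 1e-4)        # (C,1) sum is 1/2
print(abs(holder_means(alternating, 2) - 0.25) < 1e-4)  # (C,2) sum is 1/4
```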

However, the third series $1+2+3+\dotsb$ is not Cesàro summable, nor indeed can any stable linear method sum it, as already mentioned. Zeta function regularization can sum it: given a series $\sum a_n$, we perform analytic continuation of $\sum \dfrac{1}{a_n^s}$ to $s=-1$. This is stable, but not linear. Alternatively, Dirichlet series regularization can sum it: analytic continuation of $\sum \dfrac{a_n}{n^s}$ to $s=0$, which is linear but not stable. Either method can sum $1+2+3+\dotsb$, and in fact the two methods coincide for this series.
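
For a concrete numeric check (my own sketch, using the standard relation $\zeta(s) = \eta(s)/(1-2^{1-s})$ with the Dirichlet eta function; `abel_eta` is a hypothetical helper name): the Abel sum of $\eta(-1) = 1-2+3-4+\dotsb$ is $\frac14$, and dividing by $1-2^{1-s}=-3$ at $s=-1$ gives $-\frac1{12}$, mirroring the video's own step $-S_2 = 3S_3$.

```python
# eta(-1) is formally 1 - 2 + 3 - 4 + ...; its Abel sum is the limit of the
# power series sum (-1)^(n-1) * n * x^n = x/(1+x)^2 as x -> 1-, i.e. 1/4.
def abel_eta(x, terms=100000):
    return sum((-1) ** (n - 1) * n * x ** n for n in range(1, terms + 1))

eta_at_minus_1 = abel_eta(0.999)                 # close to 1/4
# zeta(s) = eta(s) / (1 - 2^(1-s)); at s = -1 the denominator is 1 - 4 = -3.
zeta_at_minus_1 = eta_at_minus_1 / (1 - 2 ** 2)
print(abs(zeta_at_minus_1 - (-1 / 12)) < 1e-5)   # prints True
```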

So I think this is the error in the Numberphile video, where they write $(0 + 4 + 0 + 8 + 0 + 12 + \dotsb) = 4+8+12+\dotsb$. If you want to use a linear summation method, you should not also assume stability.

Can the calculation be saved? Can we show by a naive computation using only linearity but not stability, or vice versa only stability but not linearity, that $1+2+3+\dotsb=-\frac{1}{12}$, without an explicit choice of a summation method?

This question is mostly a duplicate of What mistakes, if any, were made in Numberphile's proof that $1+2+3+\cdots=-1/12$?, or perhaps What consistent rules can we use to compute sums like 1 + 2 + 3 + ...? or What can be computed by axiomatic summation?, which I am reposting with more context, because I feel that the answers there did not fully engage with or address the question.


Numberphile's Proof of $1+2+3+\dotsb=-\frac{1}{12}$.

$S_1 = 1 - 1 + 1 - 1 + 1 - 1 + \dotsb$

$S_1 = 1 + (-1 + 1 - 1\dotsb)$ using stability

$-1 + 1 - 1\dotsb = -S_1$ using linearity

hence $S_1=1-S_1$ and $S_1=\frac{1}{2}$.

$S_2 = 1 - 2 + 3 - 4 + \dotsb $

$S_2' = 0 + 1 - 2 + 3 - 4 + \dotsb = 0 + S_2 = S_2$ by stability

$S_2 + S_2' = 1 - 1 + 1 - 1 \dotsb = 2S_2$ by linearity

hence $2S_2 = S_1 = \frac{1}{2}$ and $S_2=\frac{1}{4}$.

$S_3 = 1 + 2 + 3 + 4 + \cdots $

Finally, take

\begin{align} S_3 - S_2 & = 1 + 2 + 3 + 4 + \cdots \\ & - (1 - 2 + 3 - 4 + \cdots) \\ & = 0 + 4 + 0 + 8 + \cdots \\ & = 4 + 8 + 12 + \cdots \\ & = 4S_3 \end{align} here we used linearity to get to line 3, stability to get to line 4, and linearity to get to line 5.

Hence $-S_2=3S_3$ or $-\frac{1}{4}=3S_3$.

And so $S_3=-\frac{1}{12}$. $\blacksquare$
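
Taken at face value, the video's manipulations are just a linear system in the symbols $S_1, S_2, S_3$; a quick symbolic check (my own, using SymPy) confirms the algebra, though of course not the summation steps themselves:

```python
from sympy import symbols, Eq, solve, Rational

S1, S2, S3 = symbols('S1 S2 S3')
sol = solve([Eq(S1, 1 - S1),         # S1 = 1 - S1          (Grandi step)
             Eq(2 * S2, S1),         # S2 + S2' = S1 with S2' = S2
             Eq(S3 - S2, 4 * S3)],   # S3 - S2 = 4*S3
            [S1, S2, S3], dict=True)[0]
print(sol[S1], sol[S2], sol[S3])     # prints 1/2 1/4 -1/12
assert sol[S3] == Rational(-1, 12)
```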


Wikipedia's proof that stable linear summation methods cannot sum $1+2+3+\dotsb$

$S_3=1 + 2 + 3 + \dotsb$

$S_3' = 0 + 1 + 2 + 3 + \dotsb = 0 + S_3 = S_3$ by stability.

$S_4 = S_3 - S_3' = 1 + 1 + 1 + \dotsb = S_3 - S_3 = 0$

$S_4' = 0 + 1 + 1 + 1 + \dotsb = 0 = S_4$ by stability again,

and $S_4 - S_4' = 1 + 0 + 0 + \dotsb = 1$ by linearity, which is a contradiction.
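
The contradiction can also be phrased as an unsatisfiable linear system: encoding just the last two steps in SymPy (my own sketch) shows that the constraints on $S_4$ and its shift $S_4'$ admit no solution.

```python
from sympy import symbols, linsolve

S4, S4p = symbols('S4 S4p')          # S4 = 1+1+1+..., S4p = its shift 0+1+1+...
system = [S4 - S4p,                  # stability:  S4' = S4
          S4 - S4p - 1]              # linearity:  S4 - S4' = 1 + 0 + 0 + ... = 1
print(linsolve(system, [S4, S4p]))   # prints EmptySet: no assignment works
```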



Solution 1:

I'm not sure that I understood your requirement of "either linearity or stability but not both" correctly, because I'm not firm with that terminology.
But assuming I did, I think the summation method proposed by Helmut Hasse[1] involves linearity (it can be expressed as a matrix summation method), but does not allow an index shift by more than one index.

Notation: Consider a row vector $Z(s) = [1/1^s,1/2^s,1/3^s,...]$ (all following vectors and matrices are ideally of infinite size). When such a vector is taken as a column vector, we denote it by a prefixed "c": $ ^cZ(s)$, and when taken as a diagonal matrix, by a prefixed "d": $ ^dZ(s) $

Next, consider a row vector $V(x) = [1,x,x^2,x^3,...]$ with the notations $ ^cV(x)$ and $ ^dV(x)$ defined accordingly.

Convergent cases
For the convergent cases of zeta summation we can simply write the dot product of $Z()$ and $V(1)$, for instance $ \zeta(4) = Z(4) \cdot ^cV(1) $ . But of course we can also use the same method as for the divergent cases described below.

Divergent cases
Helmut Hasse's method can now be written as a matrix-summation method.
Denote the upper triangular Pascal matrix by $P$ and introduce the modification $\large \Delta \small = \;^dV(-1) \cdot P$ where $$ P = \left[\small \begin{array} {rrr} 1&1&1&1& \cdots \\ .&1&2&3& \cdots \\ .&.&1&3& \cdots \\ .&.&.&1& \cdots \\ \vdots &\vdots &\vdots &\vdots & \ddots \end{array} \right] \qquad \Delta= \left[\small \begin{array} {rrr} 1&1&1&1& \cdots \\ .&-1&-2&-3& \cdots \\ .&.&1&3& \cdots \\ .&.&.&-1& \cdots \\ \vdots &\vdots &\vdots &\vdots & \ddots \end{array} \right]$$

$\qquad$ Example $\zeta(-3)$ : Our goal is to demonstrate the summation of $\zeta(-1)$, but to make things more explicit, we first use $\zeta(-3)$

Compute $\zeta(-3) \underset{\text{analy.} \\ \text{contd.}}= 1^3+2^3+3^3+4^3+\dotsb$ in three steps:

  1. Set $s=-3$. Introduce $ \small W(s) = Z(s -1) \cdot \large \Delta \small \qquad \qquad= [1, -15, 50, -60, 24, 0, 0, ... ]$
    (where in the resulting vector $W(-3)$ all trailing entries are zero)

  2. Compute $\small y = W(s) \cdot ^cZ(1) \qquad \qquad = \frac11-\frac{15}2+\frac{50}3-\frac{60}4+\frac{24}5 = -1/30 $
    (which is of course the Bernoulli number $B_4$)

  3. Compute $ \small \zeta(-3)\underset{\text{Hasse}}= y/(s-1) = 1/120$

$\qquad$ Example $\zeta(-1)$: To compute $\zeta(-1)$ we simply insert $ s=-1 $ and find
$$ \zeta(s) \underset{\text{Hasse}} = Z(s-1) \cdot \; \large \Delta \small \cdot \;^cZ(1) / (s-1) \qquad \qquad = -\frac1{12}$$
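
In formula form (my own reading of the answer's notation, so take it as a sketch), the recipe above is Hasse's globally convergent series $\zeta(s) = \frac{1}{s-1}\sum_{n\ge 0}\frac{1}{n+1}\sum_{k=0}^{n}(-1)^k\binom{n}{k}(k+1)^{1-s}$. For negative integer $s$ the inner finite-difference sums vanish for large $n$, so the computation is finite and exact:

```python
from fractions import Fraction
from math import comb

def hasse_zeta(s, rows=50):
    """Hasse's series for zeta(s), exact for negative integer s, where the
    inner finite-difference sums eventually vanish."""
    total = Fraction(0)
    for n in range(rows):
        # inner sum: n-th forward difference (with signs) of (k+1)^(1-s)
        inner = sum((-1) ** k * comb(n, k) * Fraction(k + 1) ** (1 - s)
                    for k in range(n + 1))
        total += inner / (n + 1)
    return total / (s - 1)

print(hasse_zeta(-3))   # prints 1/120, matching the worked example
print(hasse_zeta(-1))   # prints -1/12
```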


This method is linear (it can be expressed as a matrix summation method); but it is, I think, not stable: we can shift the sequence of numbers in $ ^cZ(s)$ by one row and arrive at the same result, but not by more, so it is, if I understand your terminology correctly, "unstable" against higher-order shifts. So possibly this answer matches your question.

  • Note: the method of [H. Hasse][1] makes use of a binomial composition of the powers of consecutive natural numbers which was already known earlier; K. Knopp had a bit earlier done a very similar summation using $ ^cV(1/2)$ instead of $ ^cZ(1)$ (and then found values not for the zeta, but for the alternating zeta, often denoted by $\eta$ ("eta")); the version of H. Hasse using $ \;^cZ(1)$ was/is still unique, however. I think there are some references in Wikipedia on one of the pages about divergent summation for the zeta.

    [1] Helmut Hasse: "Ein Summierungsverfahren für die Riemannsche ζ-Reihe", Mathematische Zeitschrift 32 (1930)