Solution 1:

There are many ways to see that your result is the right one. What does "the right one" mean?

It means that whenever such a sum appears anywhere in physics - and I explicitly emphasize that this means not just in string theory, but also in experimentally doable measurements of the Casimir force (the force between parallel metal plates resulting from the quantized standing electromagnetic waves in between) - and one knows that the result is finite, the only finite part of the result that may be consistent with the other symmetries of the problem (and the one that is actually confirmed experimentally whenever such a confirmation is possible) is equal to $-1/12$.

It's another widespread misconception (see all the incorrect comments right below your question) that the zeta-function regularization is the only way to calculate the proper value. Let me show a completely different calculation - one that appears as a homework exercise in Joe Polchinski's "String Theory" textbook.

Exponential regulator method

Add an exponentially decreasing regulator to make the sum convergent - so that the sum becomes $$ S = \sum_{n=1}^{\infty} n e^{-\epsilon n} $$ Note that this is not equivalent to generalizing the sum to the zeta function. In the zeta function, $n$ is the base that is raised to the power $-s$; here, the regulator has $n$ in the exponent. Obviously, the original sum of natural numbers is obtained in the $\epsilon\to 0$ limit of the formula for $S$. In physics, $\epsilon$ would be viewed as a kind of "minimum distance" that can be resolved.
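
If you want to do the exact evaluation by hand, one quick route (a minimal sketch) is to recognize the summand as a derivative of a geometric series: $$ \sum_{n=1}^{\infty} n e^{-\epsilon n} = -\frac{d}{d\epsilon} \sum_{n=1}^{\infty} e^{-\epsilon n} = -\frac{d}{d\epsilon}\,\frac{e^{-\epsilon}}{1-e^{-\epsilon}} = -\frac{d}{d\epsilon}\,\frac{1}{e^\epsilon-1} = \frac{e^\epsilon}{(e^\epsilon-1)^2} $$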

The sum above may be exactly evaluated and the result is (use Mathematica if you don't want to do it yourself, but you can also do it by hand as sketched above) $$ S = \frac{e^\epsilon}{(e^\epsilon-1)^2} $$ We will only need the first few terms of the Laurent expansion around $\epsilon = 0$. $$ S = \frac{1+\epsilon+\epsilon^2/2 + O(\epsilon^3)}{(\epsilon+\epsilon^2/2+\epsilon^3/6+O(\epsilon^4))^2} $$ We have $$ S = \frac{1}{\epsilon^2} \frac{1+\epsilon+\epsilon^2/2+O(\epsilon^3)}{(1+\epsilon/2+\epsilon^2/6+O(\epsilon^3))^2} $$ You see that the $1/\epsilon^2$ leading divergence survives and the next subleading term cancels. The resulting expansion may be calculated with this Mathematica command
1/epsilon^2 * Series[epsilon^2 Sum[n Exp[-n epsilon], {n, 1, Infinity}], {epsilon, 0, 5}]

and the result is $$ \frac{1}{\epsilon^2} - \frac{1}{12} + \frac{\epsilon^2}{240} + O(\epsilon^4) $$ In the $\epsilon\to 0$ limit we are interested in, the $\epsilon^2/240$ term as well as the smaller ones go to zero and may be erased. The leading divergence $1/\epsilon^2$ may be and must be canceled by a local counterterm - a vacuum energy term. This is true for the Casimir effect in electromagnetism (in that case, the cancelled pole may be interpreted as the sum of the zero-point energies that would be present if no metal plates bounded the region), for the zero-point energies in string theory, and everywhere else. The cancellation of the leading divergence is needed for the physics to be finite - but one may guarantee that the counterterm doesn't affect the finite term, $-1/12$, which is the correct result of the sum.
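
As an extra numerical sanity check (not needed for the argument, but reassuring), one may subtract the pole from the exact closed form and watch the remainder approach $-1/12 \approx -0.0833$ as $\epsilon$ shrinks, e.g. with the Mathematica command
Table[{eps, Exp[eps]/(Exp[eps] - 1)^2 - 1/eps^2}, {eps, {0.1, 0.01, 0.001}}]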

In physics applications, $\epsilon$ would be dimensionful, so its different powers are sharply separated and may be treated individually. That's why the local counterterms may eliminate the leading divergence without affecting the finite part. That's also why you couldn't have used a more complicated regulator that mixes different powers of $\epsilon$, like $\exp(-(\epsilon+\epsilon^2)n)$.
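
One quick way to see explicitly what would go wrong with such a mixed regulator: substituting $\epsilon \to \epsilon+\epsilon^2$ in the expansion above gives $$ \sum_{n=1}^{\infty} n e^{-(\epsilon+\epsilon^2) n} = \frac{1}{(\epsilon+\epsilon^2)^2} - \frac{1}{12} + O(\epsilon^2) = \frac{1}{\epsilon^2} - \frac{2}{\epsilon} + 3 - \frac{1}{12} + O(\epsilon) $$ so if one only removed the power-law divergences $1/\epsilon^2$ and $2/\epsilon$, the leftover finite piece would be shifted by $3$ - the mixing of different powers of $\epsilon$ contaminates the finite part.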

There are many other, apparently inequivalent ways to compute the right value of the sum. It is not just the zeta function.

Euler's method

Let me present one more, slightly less modern method, the one Leonhard Euler used to calculate that the sum of the natural numbers is $-1/12$. It's of course a bit more heuristic, but Euler's heuristics showed that he had a good intuition, and the derivation can be turned into a modern physics derivation, too.

We will work with two sums, $$ S = 1+2+3+4+5+\dots, \quad T = 1-2+3-4+5-\dots $$ Extrapolating the geometric and similar sums to the divergent (and, in this case, marginally divergent) domain of values of $x$, the expression $T$ may be summed according to the Taylor expansion $$ \frac{1}{(1+x)^2} = 1 - 2x + 3x^2 -4x^3 + \dots $$ Substitute $x=1$ to see that $T=+1/4$. The value of $S$ is easily calculated now: $$ T = (1+2+3+\dots) - 2\times (2+4+6+\dots) = (1+2+3+\dots) (1 - 4) = -3S$$ so $S=-T/3=-1/12$.
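
If you want to let a computer confirm the two ingredients (Euler, of course, needed no such check), the Taylor expansion and the $x\to 1$ limit of the closed form may be verified in Mathematica with
Series[1/(1 + x)^2, {x, 0, 5}]
Limit[1/(1 + x)^2, x -> 1]
which return $1-2x+3x^2-4x^3+5x^4-6x^5+O(x^6)$ and $1/4$, respectively.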

A zeta-function calculation

A somewhat unusual calculation of $\zeta(-1)=-1/12$ of mine may be found in the Pictures of Yellow Roses, a Czech student journal. The website no longer works, although a working snapshot of the original website is still available through the WebArchive (see this link). A 2014 English text with the same evaluation at the end can be found at The Reference Frame.

The comments were in Czech, but the equations represent the bulk of the language that really matters, so the Czech comments shouldn't be a problem. A new argument (subscript) $s$ is added to the zeta function. The new function coincides with the old zeta function for $s=0$, and for $s=1$ it only differs from it by one. We Taylor expand around $s=0$ to get to $s=1$ and find out that only a finite number of terms survives if the main argument $x$ is a non-positive integer. The resulting recursive relations for the zeta function allow us to compute its values at integers smaller than $1$, and to prove that the function vanishes at negative even values of $x$.

Solution 2:

Here is a variant on Lubos Motl's answer:

Let $S = \sum_{n=1}^{\infty} n$. Then $S - 4 S = \sum_{n = 1}^{\infty} (-1)^{n-1} n.$ We will evaluate this latter expression with a regularization similar to Lubos Motl's.
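
Written out term by term (purely formally, of course, since both series diverge), the rearrangement reads $$ S - 4S = (1+2+3+4+\dots) - 2\,(2+4+6+8+\dots) = 1 + (2-4) + 3 + (4-8) + \dots = 1 - 2 + 3 - 4 + \dots $$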

Namely, consider $$\sum_{n=1}^{\infty} (-1)^{n-1} n t^n = -t \dfrac{d}{dt} \sum_{n=1}^{\infty} (-t)^n = -t \dfrac{d}{dt} \dfrac{1}{1+t} = \dfrac{t}{(1+t)^2}.$$ (Strictly speaking, the geometric series sums to $\dfrac{1}{1+t} - 1$, but the constant is annihilated by the derivative.)
Letting $t \to 1,$ we find that $-3 S = \dfrac{1}{4}$, and hence that $S = \dfrac{-1}{12}.$


To see the relationship between this approach and Lubos Motl's, note that if we write $t = e^{-\epsilon},$ then $t^n = e^{-\epsilon n}$ is precisely the exponential regulator used there and $t\dfrac{d}{dt} = -\dfrac{d}{d\epsilon},$ so in fact the arguments are essentially the same, except that Lubos doesn't perform the initial step of replacing $S$ by $S - 4S$, which means that he has the pole $\dfrac{1}{\epsilon^2}$ which he then subtracts away.


As far as I know, this trick of replacing $\zeta(s)$ by $(1-2^{-s+1})\zeta(s)$ is due to Euler, and it is now a standard method for replacing $\zeta(s)$ by a function which carries the same information, but does not have a pole at $s = 1$. The evaluation of $\zeta(s)$ at negative integers by passing to $(1-2^{-s+1})\zeta(s)$ and then performing Abelian regularization as above is also due to Euler, I believe. It is easy to see the Bernoulli numbers appearing in this way, for example.
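
To sketch how the Bernoulli numbers show up (a standard computation, just filling in the last remark): with the exponential regulator $t = e^{-\epsilon}$, the alternating geometric series is $$ \sum_{n=1}^{\infty} (-1)^{n-1} e^{-\epsilon n} = \frac{1}{e^\epsilon+1} = \frac{1}{e^\epsilon-1} - \frac{2}{e^{2\epsilon}-1} = \sum_{m=0}^{\infty} (1-2^m)\,\frac{B_m\,\epsilon^{m-1}}{m!}, $$ where the $B_m$ are defined by $\dfrac{z}{e^z-1} = \sum_{m\geq 0} B_m \dfrac{z^m}{m!}$, and the would-be pole at $\epsilon=0$ cancels because the $m=0$ term vanishes. Applying $\left(-\dfrac{d}{d\epsilon}\right)^k$ and letting $\epsilon \to 0$, only the $m=k+1$ term survives (the lower terms are killed by the derivatives, the higher ones by the limit), and one finds $(1-2^{k+1})\,\zeta(-k) = (-1)^k\,(1-2^{k+1})\,\dfrac{B_{k+1}}{k+1}$, i.e. $\zeta(-k) = -\dfrac{B_{k+1}}{k+1}$ for positive integers $k$; for $k=1$ this is $-\dfrac{B_2}{2} = -\dfrac{1}{12}$ once more.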


Of course, taken literally, the series $\sum_{n=1}^{\infty} n$ diverges to $+\infty$, so any attempt to assign it a finite value will involve some form of regularization. Analytic continuation of the $\zeta$-function is one form of regularization, and the Abelian regularization that Lubos Motl and I are making is another. I can't quote a precise theorem to this effect (although maybe others can), but with such a simple expression as $\sum_{n = 1}^{\infty} n,$ I'm reasonably confident that any sensible regularization will necessarily yield the same value of $\dfrac{-1}{12}$. (Lubos Motl makes the same assertion in his answer.)