When exploring the divergent series consisting of the sum of all natural numbers

$$\sum_{k=1}^\infty k=1+2+3+4+\ldots$$

I came across the following identity involving a one-sided limit:

$$\lim_{x\to0^-}\sum_{k=1}^\infty k\exp(kx)\cos(kx)=-\frac{1}{12}$$

Using zeta function regularization yields the same value:

$$\sum_{k=1}^\infty k^{-s}=\zeta(s)$$ $$\zeta(-1)=-\frac{1}{12}$$
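A quick numerical sanity check in Mathematica (a sketch; the helper s1 and the sample point $x=-1/100$ are my own choices):

s1[x_] = Sum[k Exp[k x] Cos[k x], {k, 1, Infinity}];  (* closed form of the sum, valid for x < 0 *)
{N[s1[-1/100], 10], N[Zeta[-1], 10]}  (* both come out near -0.0833333 *)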

In general, I found that for $y\neq0$ and $n=1,5,9,13,\ldots$ (i.e. $n=4m+1$ with $m=0,1,2,\ldots$),

$$\lim_{x\to0^-}\sum_{k=1}^\infty k^n\exp(kxy)\cos(kxy)=\zeta(-n)$$
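The same kind of spot check for $n=5$ (again a sketch, with $y=1$):

s5[x_] = Sum[k^5 Exp[k x] Cos[k x], {k, 1, Infinity}];
{N[s5[-1/100], 10], N[Zeta[-5], 10]}  (* both come out near -1/252 = -0.0039683 *)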

However, a similar limit did not exist for other powers of $k$, e.g.

$$\lim_{x\to 0^-}\sum_{k=1}^\infty k^2\exp(kx)\cos(kx)$$ $$\lim_{x\to 0^-}\sum_{k=1}^\infty k^3\exp(kx)\cos(kx)$$

The regularized values of the corresponding series are $\zeta(-2)=0$ and $\zeta(-3)=\frac{1}{120}$.
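For concreteness, here is a quick numerical look at the $n=2$ case (a sketch, using the same kind of closed-form evaluation as in the plot further below):

s2[x_] = Sum[k^2 Exp[k x] Cos[k x], {k, 1, Infinity}];
Table[{x, N[s2[x], 6]}, {x, {-1/10, -1/100, -1/1000}}]  (* the values grow without bound as x -> 0^- *)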

Given this information, I have the following questions:

  1. What is the connection between the limit approach and the zeta function approach?
  2. Why does the limit expression only seem to converge for $n=4m+1$?
  3. Can the limit approach be used to find the sum for other powers of $k$? If so, how?

Here is a plot I made with Mathematica using the command

Plot[Evaluate[Sum[k*Exp[x*k]*Cos[x*k], {k, 1, Infinity}]], {x, -16, 16}, PlotRange -> {-.25, .25}, AspectRatio -> 1]

[Plot of $\sum_{k=1}^\infty k\,e^{kx}\cos(kx)$ for $-16<x<16$]

Notice how it approaches $-1/12\approx-0.08333$ as $x$ approaches $0$.

Further information:

$$\sum_{k=1}^\infty k^n\exp(kx)=\mathrm{Li}_{-n}(\exp(x))=\frac{n!}{(-x)^{n+1}}+\zeta(-n)+O(x)$$
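This expansion can be spot-checked by extracting the constant term of the Laurent series (a sketch):

Table[{n, SeriesCoefficient[PolyLog[-n, Exp[x]], {x, 0, 0}], Zeta[-n]}, {n, 1, 4}]
(* expected: {{1, -1/12, -1/12}, {2, 0, 0}, {3, 1/120, 1/120}, {4, 0, 0}} *)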


Edit:

Based on Micah's answer, we seek an $f$ such that $f(s,0) = 1$ and \begin{align} \int_0^\infty x^s f(s,x) \,\mathrm{d}x &= 0 \end{align}

Let \begin{align} f(s,x) &= \mathrm{e}^{-x}(1+ax) \end{align}

Then $f(s,0) = 1$. Assuming $\mathrm{Re}(s) > -1$, \begin{align} 0 &= \int_0^\infty x^s f(s,x) \,\mathrm{d}x \\ &= \int_0^\infty x^s \mathrm{e}^{-x}(1+ax) \,\mathrm{d}x \\ &= \int_0^\infty x^s \mathrm{e}^{-x} \,\mathrm{d}x + a \int_0^\infty x^{s+1} \mathrm{e}^{-x} \,\mathrm{d}x \\ &= \Gamma(s+1) + a (s+1) \Gamma(s+1) \\ &= (1 + a(s+1)) \Gamma(s+1) \\ \Longrightarrow a &= -\frac{1}{s+1} \\ \Longrightarrow f(s,x) &= \mathrm{e}^{-x}\left( 1 - \frac{x}{s+1} \right) \end{align}

Thus \begin{align} \zeta(-s) &= \lim_{\varepsilon \rightarrow 0^+} \lim_{m \rightarrow \infty} \sum_{n=1}^m n^s f(s, n \varepsilon) \\ &\overset{\star}{=} \lim_{m \rightarrow \infty} \lim_{\varepsilon \rightarrow 0^+} \sum_{n=1}^m n^s f(s, n \varepsilon) \\ &= \lim_{m \rightarrow \infty} \sum_{n=1}^m n^s f(s, 0) \\ &= \lim_{m \rightarrow \infty} \sum_{n=1}^m n^s \\ &= \sum_{n=1}^\infty n^s \end{align}

where the star indicates the non-rigorous step of exchanging limits. In fact, this regulator seems to work for all $s \neq -1$ (see some plots for $s < -1$ below). Hence we can also regularize the harmonic series as follows: \begin{align} \gamma &= \frac{1}{2} \lim_{\varepsilon \rightarrow 0^+} (\zeta(1+\varepsilon) + \zeta(1-\varepsilon)) \\ &= \frac{1}{2} \lim_{\varepsilon \rightarrow 0^+} \left(\lim_{m \rightarrow \infty} \sum_{n=1}^m n^{-1-\varepsilon} f(-1-\varepsilon, n \varepsilon) + \lim_{m \rightarrow \infty} \sum_{n=1}^m n^{-1+\varepsilon} f(-1+\varepsilon, n \varepsilon) \right) \\ &= \frac{1}{2} \lim_{\varepsilon \rightarrow 0^+} \lim_{m \rightarrow \infty} \left(\sum_{n=1}^m n^{-1-\varepsilon} f(-1-\varepsilon, n \varepsilon) + \sum_{n=1}^m n^{-1+\varepsilon} f(-1+\varepsilon, n \varepsilon) \right) \\ &\overset{\star}{=} \frac{1}{2} \lim_{m \rightarrow \infty} \lim_{\varepsilon \rightarrow 0^+} \left(\sum_{n=1}^m n^{-1-\varepsilon} f(-1-\varepsilon, n \varepsilon) + \sum_{n=1}^m n^{-1+\varepsilon} f(-1+\varepsilon, n \varepsilon) \right) \\ &= \frac{1}{2} \lim_{m \rightarrow \infty} \left(\sum_{n=1}^m n^{-1} f(-1, 0) + \sum_{n=1}^m n^{-1} f(-1, 0) \right) \\ &= \frac{1}{2} \lim_{m \rightarrow \infty} \left(\sum_{n=1}^m n^{-1} + \sum_{n=1}^m n^{-1} \right) \\ &= \lim_{m \rightarrow \infty} \sum_{n=1}^m n^{-1} \\ &= \sum_{n=1}^\infty n^{-1} \\ \end{align}

Here is the Mathematica code and its plots:

(* regulator ansatz f(s, x) = e^-x (1 + a x); a is fixed below by the zero-integral condition *)
f[s_, x_] := Exp[-x] (1 + a x)
(* solve Integrate[x^s f[s, x], {x, 0, Infinity}] == 0 for a and substitute back,
   so that g(s, t) = e^-t (1 - t/(s + 1)) *)
g[s_, t_] := 
 Evaluate@Simplify[
   f[s, t] /. Solve[Integrate[x^s f[s, x], {x, 0, Infinity}] == 0, a],
    Assumptions -> Re[s] > -1]
g[s, t]
(* regulated partial sums versus Zeta[-s] for s = -4, ..., 4 *)
Table[{s, 
    Plot[{Zeta[-s], 
      Sum[n^s g[s, n \[Epsilon]], {n, 1, 1000}]}, {\[Epsilon], 0, 1}, 
     Evaluated -> True]}, {s, -4, 4, 1/2}] // TableForm // Quiet
(* symmetrized s = -1 combination versus EulerGamma *)
Plot[{EulerGamma, 
  Sum[(n^(-1 + \[Epsilon]) g[-1 + \[Epsilon], n \[Epsilon]] + 
    n^(-1 - \[Epsilon]) g[-1 - \[Epsilon], n \[Epsilon]])/
   2, {n, 1, 1000}]}, {\[Epsilon], 0, 1}, Evaluated -> True]

[Plots: the regulated partial sums versus $\zeta(-s)$ for $s=-4,\ldots,4$, and the symmetrized $s=-1$ combination versus $\gamma$]
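As a non-graphical spot check of the same regulator at $s=1$ (a sketch; $\varepsilon=0.01$ and the cutoff $10^4$ are arbitrary choices):

With[{s = 1, eps = 0.01},
 Sum[n^s Exp[-n eps] (1 - n eps/(s + 1)), {n, 1, 10^4}]]  (* comes out near -0.0833, compare Zeta[-1] = -1/12 *)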


In Terence Tao's blog post about relating divergent series to negative zeta values, he uses Euler-Maclaurin summation to show that if $\eta$ is smooth and compactly-supported, and $\eta(0)=1$, then $$ \sum_{n=0}^\infty n^s \eta(n/N)=C_{\eta,s} N^{s+1} + \zeta(-s) + O(1/N) $$ for any positive integer $s$, where $C_{\eta,s}=\int_0^\infty x^s \eta(x) \, dx$. In fact, it's possible to extend his proof to the case where $\eta$ is a Schwartz function — see below for a proof.

In particular, consider $\eta(x)=e^{-x}\cos x$. After a change of variables we can write your sum in the above form, with this $\eta$. Tao discusses the connection between this asymptotic form and the analytic continuation of the zeta function; I haven't gone through that part of his blog post in detail, but assuming that it, too, could be made to work when $\eta$ was Schwartz-class, that would presumably answer your first question about the connection between your sum and the zeta-regularized form of the original divergent sum.

To answer your second question, we can look at the leading coefficient $C_{\eta,s}=\int_0^\infty x^s e^{-x} \cos x \, dx$. After a bunch of annoying but straightforward integration by parts, it's possible to compute that $\int_0^\infty x^s e^{-x} \cos x \, dx=0$ when $s \equiv 1 \pmod{4}$, and $\int_0^\infty x^s e^{-x} \sin x \, dx=0$ when $s \equiv 3 \pmod{4}$. That is, whenever $s \equiv 1 \pmod{4}$, the $\eta(x)$ you've chosen coincidentally causes the leading term of the asymptotic expansion to drop out, exposing the zeta value which is the constant term of the expansion.
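These leading coefficients are easy to tabulate directly (a quick sketch):

(* eta(x) = e^-x cos(x): the coefficient vanishes exactly at s = 1, 5, 9 *)
Table[Integrate[x^s Exp[-x] Cos[x], {x, 0, Infinity}], {s, 1, 9}]
(* expected: {0, -1/2, -3/2, -3, 0, 45, 315, 1260, 0} *)
(* the sine analogue vanishes at s = 3, 7 instead *)
Table[Integrate[x^s Exp[-x] Sin[x], {x, 0, Infinity}], {s, 1, 9}]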

You can expose $\zeta(-4k)$ in a similar fashion by taking $\eta(x)=e^{-x} \frac{\sin x}{x}$, though since it's zero, that's not particularly interesting. For example, here's $\zeta(-4)$. Similarly, you can expose $\zeta(-4k+2)$ by taking $\eta(x)=e^{-x}(\cos x + \sin x)$. The interesting question — to which I don't know the answer — would be whether there's a nice choice of $\eta$ that exposes $\zeta(-4k+1)$, since that's the other case where it's actually nonzero. But this provides at least a partial answer to your third question.
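The corresponding leading-coefficient checks for these two alternative cutoffs (again a sketch):

Table[Integrate[x^s Exp[-x] Sin[x]/x, {x, 0, Infinity}], {s, 1, 8}]  (* vanishes exactly at s = 4, 8 *)
Table[Integrate[x^s Exp[-x] (Cos[x] + Sin[x]), {x, 0, Infinity}], {s, 1, 8}]  (* vanishes exactly at s = 2, 6 *)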

Also note that if we take $\eta(x)=e^{-x}$, Tao's formula gives your "further information" expansion, as $\int_0^\infty x^n e^{-x} \, dx = n!$.
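A quick numeric illustration of that case (a sketch; $s=1$, $N=100$, and the cutoff are arbitrary choices):

(* eta(x) = e^-x, s = 1: the sum should be C N^2 + Zeta[-1] + O(1/N) with C = 1 *)
With[{NN = 100.}, Sum[n Exp[-n/NN], {n, 0, 20000}] - NN^2]  (* comes out near -0.0833 *)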


To prove Tao's formula whenever $\eta$ is smooth and rapidly decreasing, we'll use the infinite-series version of the Euler-Maclaurin formula (as found, e.g., in these notes): if $f(t)$ and all its derivatives tend to zero as $t \to \infty$, then $$ \sum_{n=0}^\infty f(n) = \int_{0}^\infty f(t) \, dt + \frac{1}{2} f(0) - \sum_{i=2}^k \frac{b_i}{i!} f^{(i-1)}(0)-\int_0^\infty \frac{B_k(\{1-t\})}{k!} f^{(k)}(t) \, dt $$ for any fixed integer $k$, where the $b_k$ are the Bernoulli numbers, the $B_k$ are the Bernoulli polynomials, and $\{x\}$ denotes the fractional part of $x$.

In our case, we set $f(t)=t^s \eta(t/N)$ and use the $(s+2)$nd-order expansion. Taking each term in the formula one-by-one, note that:

$$ \int_0^\infty f(t) \, dt=\int_0^\infty t^s \eta(t/N) \, dt = N^{s+1}\int_0^\infty x^s \eta(x) \, dx=C_{\eta,s}N^{s+1} $$ after making the substitution $x=t/N$; $$ \frac{1}{2}f(0)=0 $$ is immediate; \begin{align} -\sum_{i=2}^{s+2} \frac{b_i}{i!} f^{(i-1)}(0)&=-\frac{b_{s+1}}{(s+1)!} s! \eta(0)-\frac{b_{s+2}}{(s+2)!}\left(\frac{(s+1)!}{N} \eta'(0)\right)\\&=-\frac{b_{s+1}}{s+1} + O(1/N)\\&=\zeta(-s) + O(1/N) \end{align} as the first $s-1$ derivatives of $f$ vanish, and \begin{align} -\int_0^\infty \frac{B_{s+2}(\{1-t\})}{(s+2)!} f^{(s+2)}(t) \, dt &= -\int_0^\infty \frac{B_{s+2}(\{1-t\})}{(s+2)!} \frac{d^{s+2}}{dt^{s+2}}\left(t^s \eta(t/N)\right) \\ &=-\int_0^\infty \frac{B_{s+2}(\{1-Nx\})}{(s+2)!} \frac{1}{N^{s+2}}\frac{d^{s+2}}{dx^{s+2}}\left(N^sx^s \eta(x)\right) N \, dx \\ &=-\frac{1}{N} \int_0^\infty \frac{B_{s+2}(\{1-Nx\})}{(s+2)!} \frac{d^{s+2}}{dx^{s+2}}\left(x^s \eta(x)\right) \, dx \end{align} where we again make the substitution $x=t/N$.

This last quantity is bounded in magnitude by $\frac{1}{N}\frac{M}{(s+2)!}\int_0^\infty \left|\frac{d^{s+2}}{dx^{s+2}}\left(x^s \eta(x)\right)\right| \, dx$, where $M$ is the maximum value of $|B_{s+2}(x)|$ on $[0,1]$. Thus it too is $O(1/N)$ so long as that integral converges.

So Tao's formula works not only for compactly supported $\eta$, but for any $\eta$ such that $f(t)=t^s \eta(t)$ and all of its derivatives are integrable on $[0,\infty)$. In particular, it will work for all $s$ whenever $\eta$ is Schwartz on $[0,\infty)$.
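For a Schwartz-class (but not compactly supported) example, one can try $\eta(x)=e^{-x^2}$, where $C_{\eta,1}=\int_0^\infty x e^{-x^2}\,dx=\tfrac12$ (a sketch with arbitrary parameters):

With[{NN = 100.}, Sum[n Exp[-(n/NN)^2], {n, 0, 1000}] - NN^2/2]  (* comes out near -0.0833, i.e. close to Zeta[-1] *)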


Explanation for the Behavior of the Series (Parts $2$ and $3$)

Note that $$ \begin{align} \sum_{k=0}^\infty e^{ikx} &=\frac1{1-e^{ix}}\\ &=\frac12+\frac i2\cot\left(\frac x2\right)\\ &=\frac ix+\frac12-i\,\underbrace{\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\substack{\text{odd function of $x$}\\\text{analytic on $(-\pi,\pi)$}}}\tag{1} \end{align} $$


Odd Exponents

Taking $2n-1$ derivatives of $(1)$, we get $$ \sum_{k=0}^\infty k^{2n-1}e^{ikx}=\frac{(-1)^n(2n-1)!}{x^{2n}}+(-1)^n\underbrace{\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\substack{\text{even function of $x$}\\\text{analytic on $(-\pi,\pi)$}}}\tag{2} $$


Exponents which are $\boldsymbol{1\bmod{4}}$

If $n$ is odd, so that $2n-1\equiv1\pmod{4}$, substitute $x=(1-i)y$ in $(2)$ to get $$ \sum_{k=0}^\infty k^{2n-1}e^{k(1+i)y} =\underbrace{\frac{(2n-1)!}{(2i)^ny^{2n}}}_{\substack{\text{pure imaginary}\\\text{since $n$ is odd}}} -\underbrace{\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=(1-i)y$}}\tag{3} $$ Thus, the real part of $(3)$ is $$ \sum_{k=0}^\infty k^{2n-1}e^{ky}\cos(ky)=\left.-\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)\right|_{x=(1-i)y}\tag{4} $$ Taking the limit of $(4)$ as $y\to0$ yields the identity given in the question $$ \begin{align} &\text{When $n$ is odd; that is, }2n-1\equiv1\pmod{4}:\\ &\bbox[5px,border:2px solid #C0A000]{\lim_{y\to0^-}\sum_{k=0}^\infty k^{2n-1}e^{ky}\cos(ky)=\left.-\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)\right|_{x=0}}\tag{5} \end{align} $$
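A quick check of $(5)$ in Mathematica (a sketch; the derivative at $0$ is extracted as a series coefficient and compared with the corresponding zeta value):

Table[{m, -m! SeriesCoefficient[1/x - Cot[x/2]/2, {x, 0, m}], Zeta[-m]}, {m, 1, 9, 4}]
(* expected: {{1, -1/12, -1/12}, {5, -1/252, -1/252}, {9, -1/132, -1/132}} *)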


Exponents which are $\boldsymbol{3\bmod{4}}$

If $n$ is even, so that $2n-1\equiv3\pmod{4}$, the situation is a bit more complicated. We can still substitute $x=(1-i)y$ in $(2)$, getting $$ \sum_{k=0}^\infty k^{2n-1}e^{k(1+i)y} =\underbrace{\frac{(2n-1)!}{(2i)^ny^{2n}}}_{\text{real since $n$ is even}} +\underbrace{\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=(1-i)y$}}\tag{6} $$ We can also substitute $x=-iy$ in $(2)$, getting $$ \sum_{k=0}^\infty k^{2n-1}e^{ky}=\frac{(2n-1)!}{y^{2n}}+\underbrace{\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=-iy$}}\tag{7} $$ Taking a linear combination of $(6)$ and $(7)$ to cancel the singular part and maintain the constant part, we get $$ \begin{align} &\text{When $n$ is even; that is, }2n-1\equiv3\pmod{4}:\\ &\bbox[5px,border:2px solid #C0A000]{\lim_{y\to0^-}\sum_{k=0}^\infty k^{2n-1}e^{ky}\frac{2^n\cos(ky)-(-1)^{n/2}}{2^n-(-1)^{n/2}} =\left.\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)\right|_{x=0}}\tag{8} \end{align} $$
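For example, for $2n-1=3$ the right-hand side of $(8)$ comes out as (a one-line check, same caveats as above):

{3! SeriesCoefficient[1/x - Cot[x/2]/2, {x, 0, 3}], Zeta[-3]}  (* {1/120, 1/120} *)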


Even Exponents

Taking $2n$ derivatives of $(1)$, we get $$ \sum_{k=0}^\infty k^{2n}e^{ikx} =(-1)^n\frac{(2n)!}{x^{2n+1}}i -(-1)^ni\underbrace{\frac{\mathrm{d}^{2n}}{\mathrm{d}x^{2n}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\substack{\text{odd function of $x$}\\\text{analytic on $(-\pi,\pi)$}}}\tag{9} $$ Substitute $x=(1-i)y$ in $(9)$ to get $$ \sum_{k=0}^\infty k^{2n}e^{k(1+i)y} =-\frac{(2n)!}{y^{2n+1}}\frac{1+i}{(2i)^{n+1}} -(-1)^ni\underbrace{\frac{\mathrm{d}^{2n}}{\mathrm{d}x^{2n}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=(1-i)y$}}\tag{10} $$ We can also substitute $x=-iy$ in $(9)$ to get $$ \sum_{k=0}^\infty k^{2n}e^{ky} =-\frac{(2n)!}{y^{2n+1}} -(-1)^ni\underbrace{\frac{\mathrm{d}^{2n}}{\mathrm{d}x^{2n}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=-iy$}}\tag{11} $$ Taking a linear combination of $(10)$ and $(11)$ to cancel the singular part and maintain the constant part, we get $$ \bbox[5px,border:2px solid #C0A000]{\lim_{y\to0^-}\sum_{k=0}^\infty k^{2n}e^{ky}\frac{2^{n+1}\cos(ky)+(-1)^{\lfloor(n-1)/2\rfloor}}{2^{n+1}+(-1)^{\lfloor(n-1)/2\rfloor}}=0}\tag{12} $$ where $(-1)^{\lfloor(n-1)/2\rfloor}=\mathrm{Re}\left(\frac{i-1}{i^n}\right)$.


Unified Expression

Combining $(5)$, $(8)$, and $(12)$ yields $$ \bbox[5px,border:2px solid #C0A000]{\lim_{x\to0^-}\sum_{k=0}^\infty k^ne^{kx}\frac{2^{\lceil n/2\rceil}\cos(kx)+c_n}{2^{\lceil n/2\rceil}+c_n}=\zeta(-n)}\tag{13} $$ where $c_n$ is given by $$ c_n=-\tfrac14(-1)^{\lceil n/4\rceil}\left(2+(-1)^{\lceil n/2\rceil}\left(1-(-1)^n\right)\right) $$ or by $$ c_n=-(-1)^{\lceil n/4\rceil}\sin^2\left(\tfrac\pi4(n-1)\right) $$ or in the following table $$ \small\begin{array}{c|r|r|r|r|r|r|r} n\bmod8&1&2&3&4&5&6&7&8\\ \hline c_n&\hphantom{-}0&\hphantom{-}\frac12&\hphantom{-}1&\hphantom{-}\frac12&\hphantom{-}0&-\frac12&-1&-\frac12 \end{array}\tag{14} $$
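Here is a numeric check of $(13)$ (a sketch; it uses $\sum_{k\ge1}k^nz^k=\mathrm{Li}_{-n}(z)$, the helpers c and lhs are my own, and with the arbitrary small value $x=-1/100$ the middle column only approaches $\zeta(-n)$ rather than equaling it):

c[n_] := -(-1)^Ceiling[n/4] Sin[Pi (n - 1)/4]^2
lhs[n_, x_] := (2^Ceiling[n/2] Re[PolyLog[-n, Exp[(1 + I) x]]] + 
    c[n] PolyLog[-n, Exp[x]])/(2^Ceiling[n/2] + c[n])
Table[{n, N[lhs[n, -1/100], 10], Zeta[-n]}, {n, 1, 6}]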


The infamous $-\frac{1}{12}$ pops up as a coefficient in a range of expansions, such as

$\dfrac{z}{{\mathrm e}^{-z}-1}=-1-\dfrac{1}{2}z-\dfrac{1}{12}z^2+{\mathcal O}(z^3)$

like in the Todd class or similarly in the Baker–Campbell–Hausdorff formula and so on. In your post, you are led to it by ${\mathrm e}^{kx}\cos(kx)$, which after the reparametrization $x\mapsto \log(z)$ I'd write as $z^k\cos(\log(z^k))$. But in any case, the higher-order expansion terms of the cosine make this a not-so-nice choice, and you get a lot of cases where the limit procedure doesn't work. I've worked out some broader perspectives.

The analytic continuation, e.g. the one involved in defining the Zeta function*

$\zeta(s) = \dfrac{1}{\Gamma(s)} \int_{0}^{\infty} \dfrac{x ^ {s-1}}{{\mathrm e} ^ x - 1}\mathrm{d}x,$

is a process which smoothly probes local values of the functional expressions $f$ in $\sum_{k=0}^\infty f(k)$. You can formulate the regularization of the sum in a way that reflects that, e.g. by using a local mean like

$\langle f(k)\rangle:=\int_{k}^{k+1}f(k')\,{\mathrm d}k'$.


So the limit $\lim_{z\to 1}$ of the sum

$0+1\,z^1+2\,z^2+3\,z^3+\dots$

diverges, because of the pole in

$\sum_{k=0}^\infty k\,z^k=z\dfrac{{\mathrm d}}{{\mathrm d}z}\sum_{k=0}^\infty z^k=z\dfrac{{\mathrm d}}{{\mathrm d}z}\dfrac{1}{1-z}=\dfrac{z}{(z-1)^2}, \hspace{1cm} z\in(0,1)$

So let's consider the sum of smooth deviations. With

$\langle k\,z^k\rangle=z\dfrac{{\mathrm d}}{{\mathrm d}z}\langle z^k\rangle=z\dfrac{{\mathrm d}}{{\mathrm d}z}\langle {\mathrm e}^{k \log(z)}\rangle=\left.z\dfrac{{\mathrm d}}{{\mathrm d}z}\dfrac{z^{k'}}{\log(z)}\right|_{k}^{k+1}$,

we find that the sum $\sum_{k=0}^n\langle k\,z^k\rangle$ telescopes to $z\dfrac{{\mathrm d}}{{\mathrm d}z}\dfrac{z^{n+1}-z^0}{\log(z)}$: the upper and lower bounds cancel pairwise, and for $0<z<1$ we're left with $-z\dfrac{{\mathrm d}}{{\mathrm d}z}\dfrac{z^0}{\log(z)}=\dfrac{1}{\log(z)^2}$ plus terms suppressed by $z^n$. Finally, using the expansion

$r^2\dfrac{1}{\log(1+r)^2}=\dfrac{1}{1-r+\left(1-\frac{1}{1!\,2!\,3!}\right)r^2+{\mathrm{O}}(r^3)}=1+r+\dfrac{1}{1!\,2!\,3!}r^2+{\mathrm{O}}(r^3),$

we find

$\sum_{k=0}^\infty \left(k\,z^k-\langle k\,z^k\rangle\right)=\dfrac{z}{(z-1)^2}-\dfrac{1}{\log(z)^2}=-\dfrac{1}{12}+{\mathcal O}\left((z-1)^1\right).$
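A one-line check of the constant term (a sketch):

Series[z/(z - 1)^2 - 1/Log[z]^2, {z, 1, 2}]  (* the constant term is -1/12 *)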


The picture shows the two functions $\dfrac{z}{(z-1)^2}$ and $\dfrac{1}{\log(z)^2}$, as well as their difference (blue, red, yellow). While the functions themselves clearly have a pole at $z=1$, their difference converges to

$$-\frac{1}{1!\,2!\,3!}=-\frac{1}{12}=-0.08{\dot 3}.$$

[Plot of $\frac{z}{(z-1)^2}$ and $\frac{1}{\log(z)^2}$ (blue, red) and their difference (yellow) near $z=1$]


And here's the Mathematica calculation for higher $n$, as well as the corresponding values of the Zeta function (which match).

[Screenshot of the analogous computations for higher $n$ alongside the corresponding $\zeta(-n)$ values]
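A sketch of how such a computation can be set up (my reconstruction, not necessarily the original code): the local-mean subtraction amounts to applying $\left(z\frac{\mathrm d}{\mathrm dz}\right)^n$ to $\frac{1}{1-z}+\frac{1}{\log(z)}$ and letting $z\to1$.

reg[n_] := Limit[Nest[z D[#, z] &, 1/(1 - z) + 1/Log[z], n], z -> 1]
Table[{n, reg[n], Zeta[-n]}, {n, 1, 4}]  (* the two columns should agree: -1/12, 0, 1/120, 0 *)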


I've tried to generalize this in various directions and stumbled upon some further relations. For example, when comparing finite differences to their first order approximations, we find

$\dfrac{f'(x)\,h}{f(x+h)-f(x)}=1-\dfrac{f''(x)}{2!}\left(\dfrac{h}{f'(x)}\right)+\left(\dfrac{f''(x)\,f''(x)}{2!\,2!}-\dfrac{f'(x)\,f'''(x)}{1!\,3!}\right)\left(\dfrac{h}{f'(x)}\right)^2+{\mathcal O}(h^3).$

Note that $\dfrac{1}{2!\,2!}-\dfrac{1}{1!\,3!}=\dfrac{1}{2!\,3!}(3-2)=\dfrac{1}{12}$, which is the coefficient $\dfrac{1}{2!}B_2$ in the related Euler-Maclaurin formula.
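A concrete instance (a sketch): for $f=\exp$ all derivatives coincide, the left-hand side reduces to $\dfrac{h}{{\mathrm e}^h-1}$, and its $h^2$ coefficient is exactly that $\dfrac{1}{12}$:

Series[h/(Exp[h] - 1), {h, 0, 2}]  (* 1 - h/2 + h^2/12 + O(h^3) *)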

And the log-subtraction scheme can be performed more generally for higher order poles $\frac{1}{(z-1)^n}$ using

$\dfrac{1}{\log(z)^n}=\dfrac{1}{(z-1)^n}\left(1+\frac{n}{2}(z-1)+\frac{n}{2}\frac{3n-5}{12}(z-1)^2+\frac{n}{2}\frac{(n-2)(n-3)}{24}(z-1)^3+\dots\right)$

Here, e.g., plugging in $n=2$, the coefficient $\dfrac{n}{2}\dfrac{3n-5}{12}$ of $(z-1)^2$ becomes $\dfrac{1}{12}$.
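These coefficients can be reproduced with a quick series expansion (a sketch):

Series[(z - 1)^2/Log[z]^2, {z, 1, 3}]  (* 1 + (z-1) + (1/12)(z-1)^2, the (z-1)^3 term vanishes *)
Series[(z - 1)^3/Log[z]^3, {z, 1, 3}]  (* 1 + (3/2)(z-1) + (1/2)(z-1)^2, again no (z-1)^3 term *)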

*As I see from your profile, you're interested in physics (I'm a physicist myself), so I'd like to point out that the body of the integrand in that Zeta function representation is the expression appearing in Planck's law :). The exponential function ${\mathrm e}^x\cdot{\mathrm e}^y={\mathrm e}^{x+y}$, say in statistical physics, where $-\frac{1}{12}$ comes to play a role, enters here because the probabilities of subdivided systems behave multiplicatively, while the energy, by definition, behaves additively.