An outrageous way to derive a Laurent series: why does this work?
I had to compute a series expansion of $1/(e^{x}-1)$ about $x=0$, and in the course of the derivation I made a couple of manipulations that are not mathematically allowed. Still, comparing the final result against Maple showed that it was correct.
The following is what I did:
\begin{equation} \begin{split} \frac{1}{e^{x}-1} &= \frac{e^{-x}}{1-e^{-x}} = \sum_{n=1}^{\infty} e^{-nx} \\ &= \sum_{n=1}^{\infty} \sum_{k=-\infty}^{\infty}\frac{(-nx)^{k}}{\Gamma(k+1)}\\ &= \sum_{k=-\infty}^{\infty}(-1)^{k}\frac{x^{k}}{\Gamma(k+1)}\sum_{n=1}^{\infty} n^{k}\\ &= \sum_{k=-\infty}^{\infty}(-1)^{k}\frac{\zeta(-k)}{\Gamma(k+1)} x^{k}\\ &= \sum_{k=-1}^{\infty}(-1)^{k}\frac{\zeta(-k)}{\Gamma(k+1)} x^{k}\\ \end{split} \end{equation} I naively exchanged the order of summation to obtain the third line. Then I pretended that the summation over $n$ converged (i.e., as if $k<-1$ always held) and replaced it by the Riemann zeta function. Notice that whenever $1/\Gamma(k+1)=0$, i.e., for integer $k\leq -1$, the coefficient of $x^{k}$ vanishes, except at $k=-1$, where the pole of $\zeta(-k)$ compensates the zero of $1/\Gamma(k+1)$ and the limit gives the finite coefficient $1$, reproducing the $1/x$ pole.
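To illustrate that the final line really does reproduce the expansion of $1/(e^{x}-1)$, here is a minimal numerical sketch, assuming Python with the mpmath library; the truncation order `K` and the test point `x` are arbitrary choices made only for this check.

```python
from mpmath import mp, mpf, zeta, factorial, exp

mp.dps = 30  # working precision (decimal digits)

def claimed_series(x, K=20):
    """Partial sum of sum_{k=-1}^{K} (-1)^k zeta(-k) x^k / Gamma(k+1).

    The k = -1 term is taken as its limit value 1/x (see the discussion above);
    for k >= 0 the terms are (-1)^k zeta(-k) x^k / k!.
    """
    total = 1 / x
    for k in range(K + 1):
        total += (-1)**k * zeta(-k) * x**k / factorial(k)
    return total

x = mpf('0.3')
print(claimed_series(x))   # partial sum of the claimed expansion
print(1 / (exp(x) - 1))    # the function itself
```

For small $|x|$ the two printed values agree to essentially the working precision; the $k=0$ and $k=1$ terms, with $\zeta(0)=-\tfrac12$ and $\zeta(-1)=-\tfrac1{12}$, give the familiar $-\tfrac12+\tfrac{x}{12}$ contributions.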
I suppose there is a deeper reason why such unreliable steps led me to the correct result. My guess is that it has to do with the analytic structure of the summand as a function of $k$, but I haven't been able to figure out in detail why and when this works.
I'd appreciate any insights on this.
Let us assume ${\rm Re}(x)>0$ so that the geometric series converges. Perhaps the most intuitive way to justify OP's calculation is to use the Mellin transform. The Mellin integration contour $\gamma$ in the complex $s$-plane is a vertical line ${\rm Re}(s)=c$ directed upward. The constant $c>1$ is chosen large enough to justify switching the order of the $s$-integration and the $n$-summation below. Then we can mimic OP's calculation as follows.
\begin{equation} \begin{split} \frac{1}{e^{x}-1} ~&=~ \frac{e^{-x}}{1-e^{-x}} ~\stackrel{{\rm Re}(x)>0}{=}~ \sum_{n\in\mathbb{N}} e^{-nx} \\ ~&=~\sum_{n\in\mathbb{N}} \sum_{k\in\mathbb{N}_0} \frac{(-nx)^k}{k!}\\ ~&=~\sum_{n\in\mathbb{N}} \sum_{k\in\mathbb{N}_0} {\rm Res}\left(s\mapsto\Gamma(s)(nx)^{-s} ,-k\right) \\ ~&=~\sum_{n\in\mathbb{N}}\int_{\gamma}\! \frac{\mathrm{d}s}{2\pi i} \Gamma(s)(nx)^{-s} \\ ~&=~\int_{\gamma}\! \frac{\mathrm{d}s}{2\pi i} \sum_{n\in\mathbb{N}}\Gamma(s)(nx)^{-s} \\ ~&\stackrel{c>1}{=}~\int_{\gamma}\! \frac{\mathrm{d}s}{2\pi i}\Gamma(s)\zeta(s)x^{-s} \\ ~&=~ \sum_{k=-1}^{\infty} {\rm Res}\left(s\mapsto \Gamma(s)\zeta(s)x^{-s} ,-k\right) \\ ~&=~\frac{1}{x}+ \sum_{k\in\mathbb{N}_0} \frac{\zeta(-k)(-x)^k}{k!}.\\ \end{split} \end{equation}
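As a cross-check of the intermediate contour-integral representation, the vertical-line integral $\int_{\gamma}\frac{\mathrm{d}s}{2\pi i}\,\Gamma(s)\zeta(s)\,x^{-s}$ can be evaluated numerically and compared with $1/(e^{x}-1)$. This is a minimal sketch, again assuming Python with mpmath; the choices $c=2$ and $x=0.7$ are arbitrary test values.

```python
from mpmath import mp, mpf, mpc, gamma, zeta, exp, quad, pi, inf

mp.dps = 20

def mellin_line_integral(x, c=2):
    """Evaluate int_{c - i*inf}^{c + i*inf} ds/(2*pi*i) Gamma(s) zeta(s) x^(-s)
    along the vertical line Re(s) = c > 1, parameterized as s = c + i*t
    (so ds = i dt and the prefactor becomes 1/(2*pi))."""
    integrand = lambda t: gamma(mpc(c, t)) * zeta(mpc(c, t)) * x**(-mpc(c, t))
    return quad(integrand, [-inf, inf]) / (2 * pi)

x = mpf('0.7')
print(mellin_line_integral(x).real)  # imaginary part cancels for real x > 0
print(1 / (exp(x) - 1))
```

For ${\rm Re}(x)>0$ and $c>1$ the two printed values should agree to roughly the working precision, since the rapid decay of $\Gamma(c+it)$ makes the line integral converge quickly.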
Here we have (among other things) used the following two facts, illustrated numerically in the sketch below:
that the Gamma function $\Gamma(s)$ has poles at the non-positive integers $s=-k$, $k\in\mathbb{N}_0$, with residue ${\rm Res}(\Gamma,-k)=\frac{(-1)^k}{k!}$, and
that the Riemann zeta function $\zeta(s)$ has a pole at $s=1$ with residue ${\rm Res}(\zeta,1)=1$.
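Both residue facts are easy to check numerically, since for a simple pole at $s=a$ one has ${\rm Res}(f,a)=\lim_{s\to a}(s-a)f(s)$. A small sketch assuming Python with mpmath; the offset `eps` is an arbitrary small number.

```python
from mpmath import mp, mpf, gamma, zeta, factorial

mp.dps = 30
eps = mpf('1e-12')  # small offset from the pole

# Res(Gamma, -k) = (-1)^k / k!, approximated by eps * Gamma(-k + eps)
for k in range(4):
    print(k, eps * gamma(-k + eps), (-1)**k / factorial(k))

# Res(zeta, 1) = 1, approximated by eps * zeta(1 + eps)
print(eps * zeta(1 + eps))
```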