Factorial of infinity
Solution 1:
Let me try to explain what is going on with a simpler example. Consider the geometric series $$F(s)=\sum_{n=0}^{\infty}s^n,$$ which converges inside the unit circle $|s|<1$ to the function $F(s)=\frac{1}{1-s}$. Note, however, that $F(s)$ can be analytically continued to the whole complex plane except the point $s=1$, which is a simple pole of $F(s)$. This pole is in fact the reason the radius of convergence equals $1$. So we are tempted to write identities like $$ F(2)=1+2+4+8+\ldots=\frac{1}{1-2}=-1.$$ It is important to understand that they do not "really" hold. The logic is the following: the initial series defines some function in a "small" domain, but sometimes this function can be (uniquely!) continued to a larger domain where the original series no longer makes sense. In general, this continuation will have singularities.
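As a quick numerical illustration (a minimal Pari/GP sketch; the helper name F is introduced only for this example):

F(s) = 1/(1-s);              \\ the analytic continuation, defined for every s != 1
1.0*sum(n=0, 60, (1/2)^n)    \\ partial sum inside |s| < 1: ~ 2.0 = F(1/2)
F(2)                         \\ the continuation at s = 2 gives -1, although 1+2+4+8+... itself diverges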
Sometimes - actually, almost always - one cannot construct a continuation to the whole complex plane with only isolated singular points, because of a specific kind of obstruction called a natural boundary. An example is encountered if, instead of $F(s)$, we consider the series $$G(s)=\sum_{n=0}^{\infty}s^{n^2},$$ which converges inside the unit circle even better than $F(s)$ but cannot be continued beyond it: the circle $|s|=1$ is a natural boundary, with the singularities of $G(s)$ dense on it.
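One can at least observe a numerical symptom of this (a rough hint, not a proof): along a ray towards a root of unity the truncated series grows without bound. A small Pari/GP sketch, with the truncation point n = 400 chosen ad hoc:

z = exp(2*Pi*I/3);   \\ a third root of unity on the unit circle
\\ modulus of the truncated G(r*z); it keeps growing as r approaches 1
for(k=1, 4, my(r = 1 - 10.0^(-k)); print(r, "   ", abs(sum(n=0, 400, (r*z)^(n^2)))))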
Now let us go back to the initial question. If we consider the series $$\zeta(s)=\sum_{n=1}^{\infty}n^{-s},$$ it converges to some function $\zeta(s)$ in the halfplane $\mathrm{Re}\,s>1$. It turns out that this function can be (again, uniquely!) continued to the whole complex plane except the point $s=1$, where it has a simple pole, similarly to our previous example. This continued function, as well as its derivatives, can be evaluated at any $s\neq1$ (for example, at $s=-1$ and $s=0$). One way to do this is to systematically use the integral representation $$\Gamma(s)\zeta(s)=\int_0^{\infty}\frac{x^{s-1}dx}{e^x-1},$$ which for $\mathrm{Re}\,s>1$ reproduces the original series definition.
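As a quick sanity check of this integral representation in the region where the series converges, one can evaluate it numerically; a minimal Pari/GP sketch at $s=2$ (using intnum's endpoint code [oo,1] for integrands with exponential decay):

intnum(x=0, [oo,1], x/(exp(x)-1))   \\ ~ 1.644934... = Gamma(2)*zeta(2) = Pi^2/6
gamma(2)*zeta(2)                    \\ the same value from the built-in functions
zeta(-1)                            \\ the continued function at s = -1: ~ -0.08333... = -1/12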
Solution 2:
One way to cross-check (not prove or disprove!) such results approximately and numerically is to use the alternating zeta function at the same argument, which can be summed by simple procedures for divergent series. If we write $$ \eta(s) = \frac 1{1^s} - \frac 1{2^s} + \frac 1{3^s} - \cdots $$ and, further, $$ \eta(s) = \exp(-s\ln 1) - \exp(- s\ln2) + \exp(-s\ln 3) - \cdots $$
then we can differentiate term by term, $$ \eta'(s) = (-\ln1)\cdot \exp(-s\ln 1) - (-\ln2)\cdot \exp(- s\ln2) + (-\ln3)\cdot \exp(-s\ln 3) - \cdots $$ and letting $s \to 0$ this gives $$ \begin{eqnarray} \eta'(0) &=& (-\ln1) - (-\ln2) + (-\ln3) - \cdots \\ \eta'(0) &=& - (\ln1 - \ln2 + \ln3 - \cdots). \tag 1 \end{eqnarray}$$ The latter series can be handled by a simple procedure for divergent summation, for instance Euler summation $ \mathfrak E $, which gives $$ \eta'(0) \underset{\mathfrak E}{=} 0.225791352645\ldots $$ In (1) the signs alternate, but the corresponding series for the zeta function is non-alternating: $$ \zeta'(0) = - (\ln1 + \ln2 + \ln3 + \cdots). \tag 2$$ We now assume that the value $ \zeta'(0) = s$ exists and that we may manipulate it inside the divergent sums. Here we imitate Euler's trick behind the transformation formula between $\zeta$ and $\eta$ and exploit that the logarithms at even arguments carry a minus sign in the $\eta$-series. Since $ \ln(2m) = \ln2 + \ln m$, we may write $$ \begin{eqnarray} s &=& - \big(\ln 1 + \ln 2 + \ln 3 + \ln 4 + \ln 5 + \ln 6 + \ln 7 + \cdots\big) \\ &=& - \big( (\ln 1 - \ln 2 + \ln 3 - \ln 4 + \ln 5 - \ln 6 + \ln 7 - \cdots) \\ & & \qquad {}+ 2 \cdot ( \ln 2 + \ln 4 + \ln 6 + \ln 8 + \cdots) \big) \\ &=& \eta'(0) - 2 \cdot \big( (\ln2 + \ln1) + (\ln2 + \ln 2) + (\ln 2 + \ln 3) + \cdots\big) \\ &=& \eta'(0) - 2 \cdot ( \zeta(0) \cdot \ln 2 - s ). \tag 3 \end{eqnarray} $$ In the last step the constant $\ln 2$, which occurs as a summand in every bracket, has been collected as $\ln 2\cdot(1+1+1+\cdots)$ and given the regularized value $\zeta(0) \cdot \ln 2$, while the remaining logarithms $\ln1+\ln2+\ln3+\cdots$ equal $-\zeta'(0) = -s$ by (2). Then we can solve for $s$: $$ \begin{eqnarray} s &=& \eta'(0) - 2 \cdot ( \zeta(0) \cdot \ln 2 - s ) \\ -s &=& \eta'(0) - 2 \cdot \zeta(0) \cdot \ln 2 \\ \zeta'(0) = s &=& -\eta'(0) + 2\,\zeta(0)\cdot\ln 2 = -\eta'(0) - \ln 2, \tag 4 \end{eqnarray}$$ where the known value $\zeta(0) = -\tfrac12$ was inserted in the last step. Numerically, $$\begin{eqnarray} \zeta'(0) &\sim& - 0.225791352645\ldots - 0.693147180560\ldots \\ &\sim& -0.918938533205\ldots \tag 5 \end{eqnarray}$$ And indeed, using Pari/GP we get
log(1/sqrt(2*Pi))
%393 = -0.918938533205
and also
h=1e-12;(zeta(0+h/2)-zeta(0-h/2))/h
%394 = -0.918938533205
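As an additional cross-check (not part of the derivation above, but using only the transformation formula between $\zeta$ and $\eta$ already alluded to): differentiating the identity $\eta(s) = (1-2^{1-s})\,\zeta(s)$ and setting $s=0$, $$ \eta'(s) = 2^{1-s}\ln 2\;\zeta(s) + (1-2^{1-s})\,\zeta'(s) \quad\Longrightarrow\quad \eta'(0) = 2\ln 2\;\zeta(0) - \zeta'(0) = -\ln 2 - \zeta'(0), $$ which is exactly the relation $\zeta'(0) = -\eta'(0) - \ln 2$ from (4).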
All of this required assumptions about the summability of the divergent series involved. But there is a fairly well-developed theory of divergent summation from which one can take confirmation that the above transformations are acceptable. They are meant to show one path to the result; there are likely others that are more elegant or even rigorous.
Appendix: The Pari/GP code for the numerical evaluation of $\eta'(0)$ in (1) is
-sumalt(k=0,(-1)^k*log(1+k))
%387 = 0.225791352645