Has anyone seen a discussion of the (divergent) summation of $\sum\limits_{k=0}^\infty (-1)^k (k!)^2$?
In extending my studies of the Eulerian matrix and its suitability for a matrix-based divergent-summation procedure, I'm trying to proceed to sums of the form $$ S = \sum_{k=0}^\infty (-1)^k (k!)^2 $$ The intermediate expressions become quite complicated, and while hoping for external material for crosschecking, I found it difficult to search for that mathematical expression via Google. So I'm asking here: does someone know of a discussion of this type of series, or at least of a documented result?
Perhaps this is doable using an iterated Borel summation, which I've seen suggested by K. Knopp in his monograph on series. However, since I haven't even fully managed the ordinary Borel summation, I have no idea whether this would work at all. I'd like to have something to crosscheck against, so that my proceedings do not lead completely into the desert...
[update after the answer of J.M.] With another matrix-based summation-procedure I had arrived at the following sequence of partial sums, based on 128 terms for the matrix-based transformation: $ \displaystyle \small \begin{array} {r|l} n=128 & PkPowSum(2.8,1.9) \\ k & partial sum\\ \hline ... & ... \\ 124 & 0.668091303950 \\ 125 & 0.668091305718 \\ 126 & 0.668091307376 \\ 127 & 0.668091308931 \\ 128 & 0.668091310389 \end{array} $
and today's extension to $n=256$ terms: $ \displaystyle \small \begin{array} {r|l} n=256 & PkPowSum(2.9,1.4) \\ k & partial sum\\ \hline ... & ... \\ 252 & 0.66809132638274015684 \\ 253 & 0.66809132638194228879 \\ 254 & 0.66809132638120217436 \\ 255 & 0.66809132638051649370 \\ 256 & 0.66809132637988209645 \end{array} $
However, this was not yet employing the Eulerian transformation, and I was also not sure whether it would be meaningful at all. But now I can crosscheck both that method and the one using the "Eulerian transformation" against the much more accurate result by J.M.
(for my own future reference: the parameters of approximation were PkPowSum(2.8,1.9))
Hopefully, this answer is not too late. Neither of the previous answers seemed to be entirely satisfactory, and I hope this might be useful.
Consider the Mellin transform identity
$$\int_0^\infty u^k (2\,K_0(2\sqrt u))\,\mathrm du=(k!)^2$$
where $K_0(z)$ is the modified Bessel function of the second kind.
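As a quick numerical sanity check of this moment identity (my addition, not part of the original post; it assumes SciPy is available):

```python
import math

from scipy.integrate import quad
from scipy.special import k0  # modified Bessel function K_0

def moment(k):
    """k-th moment of the weight 2*K_0(2*sqrt(u)) on (0, inf)."""
    val, _ = quad(lambda u: u**k * 2.0 * k0(2.0 * math.sqrt(u)),
                  0.0, math.inf, limit=200)
    return val

# compare against (k!)^2 for the first few k
for k in range(5):
    print(k, moment(k), math.factorial(k) ** 2)
```

The integrable logarithmic singularity of $K_0$ at the origin is handled fine by adaptive quadrature here.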
Now, recall the fundamental (formal) relationship for the Stieltjes moment problem:
$$\int_0^\infty\frac{\mathrm d\,\eta(t)}{z+t}=\sum_{k=0}^\infty \frac{(-1)^k \eta_k}{z^{k+1}}$$
where $\eta$ is some positive measure on $(0,\infty)$ and $\eta_k=\int_0^\infty t^k \mathrm d\,\eta(t)$ are the corresponding Stieltjes moments. Making the appropriate replacements, we have
$$2\int_0^\infty\frac{K_0(2\sqrt t)\,\mathrm dt}{z+t}=\sum_{k=0}^\infty \frac{(-1)^k (k!)^2}{z^{k+1}}$$
In particular, for $z=1$, we have the formal identity
$$2\int_0^\infty\frac{K_0(2\sqrt t)\,\mathrm dt}{t+1}=\sum_{k=0}^\infty (-1)^k (k!)^2$$
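The left-hand side at $z=1$ can be evaluated directly by quadrature; a sketch (my addition, assuming SciPy):

```python
import math

from scipy.integrate import quad
from scipy.special import k0  # modified Bessel function K_0

# Stieltjes-type integral 2 * int_0^inf K_0(2 sqrt(t)) / (1 + t) dt
val, err = quad(lambda t: 2.0 * k0(2.0 * math.sqrt(t)) / (1.0 + t),
                0.0, math.inf, limit=200)
print(val)
```

This converges without trouble and reproduces the value quoted further below.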
Mathematica tells me that the integral can be expressed in terms of the Meijer $G$-function:
$$2\int_0^\infty\frac{K_0(2\sqrt t)\,\mathrm dt}{z+t}=G_{1,3}^{3,1}\left(z\mid{{0}\atop{0,0,0}}\right)$$
Here is a Mathematica verification that the function given above has the appropriate asymptotic expansion:
Series[MeijerG[{{0}, {}}, {{0, 0, 0}, {}}, z], {z, ∞, 10}]
1/z - (1/z)^2 + 4/z^3 - 36/z^4 + 576/z^5 - 14400/z^6 + 518400/z^7 -
25401600/z^8 + 1625702400/z^9 - 131681894400/z^10 + O[1/z]^11
% - Sum[((-1)^k (k!)^2)/z^(k + 1), {k, 0, 10}]
O[1/z]^11
Either numerical evaluation of the integral or the Meijer $G$ expression yields the result
$$G_{1,3}^{3,1}\left(1\mid{{0}\atop{0,0,0}}\right)\approx0.668091326377777765433014987501$$
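The same value can be reproduced outside Mathematica; here is a sketch using mpmath (my addition; `meijerg` takes the parameter lists as $a$-groups and $b$-groups, and the integral is recomputed independently as a crosscheck):

```python
from mpmath import mp, meijerg, besselk, quad, sqrt, inf

mp.dps = 30  # work with 30 significant digits

# Meijer G expression G^{3,1}_{1,3}(1 | 0 ; 0,0,0)
g = meijerg([[0], []], [[0, 0, 0], []], 1)

# Independent check: the Stieltjes integral 2 * int_0^inf K_0(2 sqrt t)/(1+t) dt
i = 2 * quad(lambda t: besselk(0, 2 * sqrt(t)) / (1 + t), [0, 1, inf])

print(g)
print(i)
```

Splitting the quadrature interval at $t=1$ keeps the logarithmic endpoint singularity under control.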
I have been asked how I came to the Mellin transform at the beginning of this post. Here's how I did it.
I started with the inverse problem; that is, I needed the inverse Mellin transform of $\Gamma(s+1)^2$:
$$\frac1{2\pi i}\int_{c-i\infty}^{c+i\infty}\Gamma(1+s)^2 t^{-s}\,\mathrm ds$$
This can be recognized as a particular case of the Meijer $G$-function, if you compare this with the definition. In particular,
$$\frac1{2\pi i}\int_{c-i\infty}^{c+i\infty}\Gamma(1+s)^2 t^{-s}\,\mathrm ds=G_{0,2}^{2,0}\left(t\mid{{—}\atop{1,1}}\right)$$
As it turns out, this case was listed in the Wolfram Functions site:
$$G_{0,2}^{2,0}\left(t\mid{{—}\atop{1,1}}\right)=2t\,K_0(2\sqrt t)$$
Inverting this relationship gives the Mellin transform identity for the square of the factorial.
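The closed form from the Wolfram Functions site is easy to spot-check numerically; a sketch with mpmath (my addition):

```python
from mpmath import mp, meijerg, besselk, sqrt

mp.dps = 20

# G^{2,0}_{0,2}(t | — ; 1,1) should equal 2*t*K_0(2*sqrt(t))
pairs = []
for t in (0.3, 1.0, 2.5):
    lhs = meijerg([[], []], [[1, 1], []], t)
    rhs = 2 * t * besselk(0, 2 * sqrt(t))
    pairs.append((lhs, rhs))
    print(t, lhs, rhs)
```
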
The series is Borel summable, provided the integral
$$ S = \int_{0}^{\infty} e^{-t} \left( \int_{0}^{\infty} \frac{e^{-u}}{1+ut}\,\mathrm du \right) \mathrm dt $$
exists, because the inner series $$ \sum_{n=0}^{\infty}(-1)^{n}\,n!\,u^{n} $$ is itself Borel summable to the exponential-integral-type function
$$ E(x) = \int_{0}^{\infty} \frac{e^{-t}}{1+xt}\,\mathrm dt, $$
so that $S=\int_0^\infty e^{-t}\,E(t)\,\mathrm dt$.
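A minimal numerical sketch of this iterated Borel integral (my addition; it assumes SciPy, with `E` named as in the formula above):

```python
import math

from scipy.integrate import quad

def E(x):
    """Borel sum of sum_n (-1)^n n! x^n: int_0^inf e^(-t)/(1 + x t) dt."""
    val, _ = quad(lambda t: math.exp(-t) / (1.0 + x * t), 0.0, math.inf)
    return val

# Outer Borel integral: S = int_0^inf e^(-t) E(t) dt
S, _ = quad(lambda t: math.exp(-t) * E(t), 0.0, math.inf, limit=200)
print(S)
```

Nested adaptive quadrature is crude but sufficient here to recover the value found elsewhere in this thread.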
Alternatively, you can use the (multiplicative) Mellin convolution, for which $ \mathcal M(e^{-t} \ast e^{-t})(s+1) = (s!)^{2} $, with the convolution defined by
$$ (f \ast g)(x) = \int_{0}^{\infty} f(t)\,g(x/t)\,\frac{\mathrm dt}{t}. $$
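With the $\mathrm dt/t$ convention, the self-convolution of $e^{-t}$ reproduces the Bessel weight from the first part of this thread, $(e^{-t}\ast e^{-t})(x)=2K_0(2\sqrt x)$; a numerical spot-check (my addition, assuming SciPy):

```python
import math

from scipy.integrate import quad
from scipy.special import k0  # modified Bessel function K_0

def conv(x):
    """(e^-t * e^-t)(x) = int_0^inf exp(-t - x/t) dt/t (multiplicative conv.)"""
    val, _ = quad(lambda t: math.exp(-t - x / t) / t, 0.0, math.inf)
    return val

for x in (0.5, 1.0, 2.0):
    print(x, conv(x), 2.0 * k0(2.0 * math.sqrt(x)))
```
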
The "Eulerian summation" procedure seems to arrive at the same value that J.M. has provided. It appears to be a weak procedure, though: doubling the number of involved terms gives only about two more digits of precision. Here is a table (correct digits before the centered dots): $$ \small \begin{array} {r|l} \text{n terms} & \text{partial sums} \\ \hline 16 & 0.668\cdot 10011598092309157 \\ 32 & 0.66809\cdot 094424944193264 \\ 63 & 0.66809132\cdot 812814354929 \\ 64 & 0.66809132\cdot 829512992943 \\ 128 & 0.66809132637\cdot 441027814 \\ \hline JM & 0.668091326377777765433014987501 \end{array} $$
This "Eulerian summation" involves the computation of the integral
$ \displaystyle f(x) = \int_0^\infty {e^{-u} \over 1 - ux}\, du $, which is asymptotic to
$ f(x) \sim 1 + x + 2!\,x^2 + 3!\,x^3 + 4!\,x^4 + \ldots$, and of its derivatives (with respect to $x$, not to $u$!) at negative integers; these can advantageously be expressed by certain polynomials in $ {1 \over x^k}$ (documented in the OEIS), cofactored with only the one original computation of $f(x)$. However, the numerical integration is rather time-consuming, so the whole procedure seems somewhat inefficient and is thus, in the end, of little practical interest.
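To illustrate the integral and its asymptotic series (my addition; a quadrature sketch assuming SciPy, not the actual PkPowSum machinery):

```python
import math

from scipy.integrate import quad

def f(x):
    """Euler's integral int_0^inf e^(-u)/(1 - u x) du; converges for x < 0."""
    val, _ = quad(lambda u: math.exp(-u) / (1.0 - u * x), 0.0, math.inf)
    return val

# For small |x| the truncated asymptotic series agrees to roughly
# the size of the first omitted term.
x = -0.05
partial = sum(math.factorial(n) * x**n for n in range(6))
print(f(x), partial)
```
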
I'll now "accept" @J.M.'s answer, because it gave the first trustworthy result.
Why it might still be an interesting procedure in principle should then be discussed elsewhere.
I wanted to add to the existing answers a more straightforward/systematic way to reach the same value as J.M., using Borel's method.
The idea is to use the fact that $(n!)^2 = n!\,n!$ to create two integrals via Borel's method; we are then left with only a geometric series.
Also, in seeking to make this a bit more general, I'll be including the $x^n$ term to look at this sum in terms of power series (though it's not needed).
Starting with $$\sum_{n=0}^{\infty} (n!)^2 x^n$$ we create the two integrals to arrive at $$\sum_{n=0}^{\infty} \frac{(n!)^2}{n!\,n!} \left(\int_0^\infty e^{-t}t^n\,dt\right) \left(\int_0^\infty e^{-t_2}t_2^n\,dt_2\right) x^n$$
Doing the usual non-rigorous step of interchanging summation and integration gives $$\int_0^\infty \int_0^\infty e^{-(t+t_2)}\sum_{n=0}^{\infty} (tt_2x)^n dtdt_2$$ Simplifying into the geometric series gives $$\int_0^\infty \int_0^\infty \frac{e^{-(t+t_2)}}{1-tt_2x} dtdt_2$$
And indeed, evaluating this at $x=-1$ (e.g. on Desmos) gives $\approx 0.668091326$, in agreement with the value above. For any negative $x$ this integral makes sense, and a graph of the integral looks like this
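The double integral can also be checked with standard quadrature; a sketch (my addition, assuming SciPy's `dblquad`):

```python
import math

from scipy.integrate import dblquad

def borel2(x):
    """Double-Borel integral for sum_n (n!)^2 x^n, valid for x < 0."""
    val, _ = dblquad(
        lambda t2, t1: math.exp(-(t1 + t2)) / (1.0 - t1 * t2 * x),
        0.0, math.inf,   # t1 range
        0.0, math.inf,   # t2 range
    )
    return val

print(borel2(-1.0))
```
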
There's also no reason we can't apply the same process to $ \sum_{n=0}^\infty (-1)^n(n!)^k $.
For instance, at $k=3$, we get $$\int_0^\infty \int_0^\infty \int_0^\infty \frac{e^{-(t_1+t_2+t_3)}}{1-x\,t_1t_2t_3}\,dt_1\,dt_2\,dt_3 $$
Interestingly, graphing the integrals for different powers $k$ shows the functions converging to the constant function $y=1$ ($k=1$ is on the bottom, and $k=4$ on top). In some ways, this intuitively makes sense: if divergent series are asymptotic (i.e., the functions they represent can be approximated by the first few terms of the series), then a function whose second coefficient blows up must be approximated by just its first term. In general, it does appear that the faster a divergent series grows, the more it behaves like a constant series.
EDIT: I'd like to add a few things I've noticed recently that give an interesting connection between the value assigned to these divergent series and differential equations.
Take the differential equation $$f'' -\frac{1}{x}f - \frac{1}{x^2} = 0$$ We start by assuming that $f$ has the form $f(x) = \sum_{n=0}^\infty a_n x^{-n}$. Formally solving for the coefficients gives $$\sum_{n=0}^\infty a_n\, n(n+1)\,x^{-(n+2)} = \sum_{n=0}^\infty a_n x^{-(n+1)} + \frac{1}{x^2} \implies a_n = -(n-1)!^2\, n$$ So the differential equation is formally solved by $$-\sum_{n=1}^\infty (n-1)!^2\, n\, x^{-n}$$
Our goal is to obtain a value for this divergent series using the divergent series $F(x)=\sum_{n=0}^\infty n!^2 x^{n}$. We can achieve this by setting $G(x)=x\frac{d}{dx}(-xF(x))$, so that $G(\frac{1}{x})=-\sum_{n=1}^\infty (n-1)!^2\, n\, x^{-n}$
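The coefficient matching amounts to the recurrence $a_{n+1} = n(n+1)\,a_n$; its closed form can be verified in a few lines of plain Python (my addition; it checks magnitudes only, so it is independent of the sign convention):

```python
import math

# The substitution f = sum a_n x^{-n} turns the ODE into the recurrence
# a_{n+1} = n (n+1) a_n; check |a_n| against the closed form ((n-1)!)^2 * n.
a = 1
for n in range(1, 10):
    assert a == math.factorial(n - 1) ** 2 * n
    a = a * n * (n + 1)
print("closed form matches the recurrence for n = 1..9")
```
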
So, we should have the convergent version of G(x) as $$G(x) = -x\frac{d}{dx}\left(x\int_{0}^{\infty}\int_{0}^{\infty}\frac{e^{-\left(t_{1}+t_{2}\right)}}{1-t_{1}t_{2}x}dt_{1}dt_{2}\right)$$
Therefore, we should expect that our initial differential equation is solved by $f(x) = G(\frac{1}{x})$. And indeed, this is correct. The graph of $f(x)$ looks like this (the black line in the middle is $f(x)$, and the other two curves are $f''(x)$ and $\frac{1}{x}f(x) + \frac{1}{x^2}$). Similar differential equations have connections to higher powers of $m$ in $(n!)^m$. For instance, the differential equation $-f'''(x) -\frac{1}{x^2}f(x) - \frac{1}{x^3}=0$ is formally solved by $-\frac{1}{x} - \sum_{n=2}^\infty \frac{(n-1)!^3}{2}\,n^2(n+1)\,x^{-n}$, which may also be obtained from the divergent series $\sum_{n=0}^\infty (-1)^n\, n!^3\, x^n$ in order to find a solution.