What is $\lim_{x\to1^{-}} (1-x)\left(\sum_{i=0}^{\infty} x^{i^2}\right)^{2}$?
This is the problem I was presented with: $$\text{Given } f(x) = \sum_{i=0}^{\infty} x^{i^2} ,$$ $$\text{find } \lim_{x\to1^{-}} (1-x)f(x)^2 .$$ At first I tried to find a closed form for $f(x)$, square it, and multiply the result by $(1-x)$, but I then learned that $f(x)$ has no elementary closed form. Next I tried dividing $f(x)^2$ by $(1+x+x^2+x^3+x^4+x^5+\cdots)$, the series expansion of $\frac{1}{1-x}$, but I couldn't make that work out either. Is there some way to simplify $f(x)$ that makes this limit easier to evaluate?
I believe I have already solved this question, but I am not managing to find the duplicate, so I will re-sketch my argument. By squaring $f(x)$ we get $$ f(x)^2 = \sum_{n\geq 0} \widetilde{r}_2(n) x^n $$ where $$ r_2(n)=\left|\{(a,b)\in\mathbb{Z}^2: a^2+b^2=n\}\right|,\qquad \widetilde{r}_2(n)=\left|\{(a,b)\in\mathbb{N}^2: a^2+b^2=n\}\right|,$$ and by the Gauss circle problem $$ \sum_{n=0}^{N}r_2(n) = \pi N + O(\sqrt{N}) $$ (the error term can be improved to $O(N^{1/3})$ via the Voronoi summation formula, but the elementary bound above is sufficient for our purposes), so that $$ R(N) = \sum_{n=0}^{N}\widetilde{r}_2(n) = \frac{\pi}{4}N +O(\sqrt{N}) $$ and $$ \frac{f(x)^2}{1-x} = \sum_{N\geq 0} R(N) x^N = \frac{\frac{\pi}{4}x}{(1-x)^2}+O\left(\frac{1}{(1-x)^{3/2}}\right)$$ as $x\to 1^-$. Multiplying both sides by $(1-x)^2$ and letting $x\to 1^-$ we get: $$ \lim_{x\to 1^-}(1-x)f(x)^2 = \color{red}{\frac{\pi}{4}}. $$
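The value $\frac{\pi}{4}$ is easy to check numerically by truncating the series. Here is a quick sanity check in Python (the $20000$-term cutoff and the sample point $x = 1-10^{-6}$ are ad hoc choices, picked so the truncation error is negligible):

```python
import math

def f(x, terms=20000):
    # truncation of f(x) = sum_{i>=0} x^(i^2); 20000 terms is an
    # ad hoc cutoff, far past the point where the summand underflows
    return sum(x ** (i * i) for i in range(terms))

x = 1 - 1e-6                 # ad hoc sample point close to 1
value = (1 - x) * f(x) ** 2
print(value, math.pi / 4)    # value is close to pi/4 ~ 0.7854
```

The remaining gap is of order $\sqrt{1-x}$, consistent with the $O\big((1-x)^{-3/2}\big)$ error term above.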
An alternative approach comes from considering the convolution with an approximate identity.
In general
$$ \left(\sum_{n\geq 0} x^{n^k}\right)^k \sim \frac{\Gamma\left(1+\frac{1}{k}\right)^k}{1-x} $$
as $x\to 1^-$. Now I have found the answer I mentioned above; it also contains a (sketch of a) proof of this last statement.
Let me check my luck with that ...
$$\lim_{x\to 1^{-}} (1-x)f^2(x)=\lim_{x\to 1^{-}} (1-x)\left(\int_0^{\infty} x^{t^2}\,\mathrm{d}t\right)^2=\lim_{x\to 1^{-}}\frac{\pi}{4}\,\frac{1-x}{\log(1/x)}=\frac{\pi}{4},$$ since $\int_0^{\infty} x^{t^2}\,\mathrm{d}t=\int_0^{\infty} e^{-t^2\log(1/x)}\,\mathrm{d}t=\frac{1}{2}\sqrt{\frac{\pi}{\log(1/x)}}$ and $\frac{1-x}{\log(1/x)}\to 1$.
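The integral surrogate can be checked on its own: by the Gaussian integral $\int_0^\infty e^{-bt^2}\,\mathrm{d}t=\frac12\sqrt{\pi/b}$ with $b=\log(1/x)$, the quantity $(1-x)\left(\int_0^\infty x^{t^2}\,\mathrm{d}t\right)^2$ equals $\frac{\pi}{4}\cdot\frac{1-x}{\log(1/x)}$, which visibly tends to $\pi/4$. A minimal Python sketch:

```python
import math

def gaussian_surrogate(x):
    # (1-x) * (integral_0^inf x^(t^2) dt)^2, evaluated in closed form
    # via the Gaussian integral with b = log(1/x)
    b = math.log(1 / x)
    return (1 - x) * (0.5 * math.sqrt(math.pi / b)) ** 2

for x in (0.9, 0.99, 0.999999):
    print(x, gaussian_surrogate(x))   # tends to pi/4 as x -> 1-
```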
The problem can be extended in several ways. One example is
$$\lim_{x\to 1^{-}} (1-x)g^3(x)=\lim_{x\to 1^{-}} (1-x)\left(\sum_{i=0}^{\infty} x^{i^3}\right)^3=\frac{1}{27}\left(\Gamma\left(\frac{1}{3}\right)\right)^3,$$ where the method described above works perfectly.
So, in the generalized form,
$$\lim_{x\to 1^{-}} (1-x)\left(\sum_{i=0}^{\infty} x^{i^n}\right)^n=\frac{1}{n^n}\left(\Gamma\left(\frac{1}{n}\right)\right)^n.$$
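The $n=3$ case can be spot-checked numerically as well; this sketch truncates the cube series at an arbitrary $10000$ terms and samples at $x=1-10^{-9}$:

```python
import math

def g(x, terms=10000):
    # truncation of sum_{i>=0} x^(i^3); the cutoff is an ad hoc choice,
    # far past the point where the summand underflows for this x
    return sum(x ** (i ** 3) for i in range(terms))

x = 1 - 1e-9
lhs = (1 - x) * g(x) ** 3
rhs = math.gamma(1 / 3) ** 3 / 27
print(lhs, rhs)              # both are roughly 0.71
```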
Letting $n \to \infty$, since $\frac{1}{n^n}\left(\Gamma\left(\frac{1}{n}\right)\right)^n=\Gamma\left(1+\frac{1}{n}\right)^n\to e^{-\gamma}$, $$\lim_{n\to\infty} \lim_{x\to 1^{-}} (1-x)\left(\sum_{i=0}^{\infty} x^{i^n}\right)^n=\frac{1}{e^{\gamma}}.$$
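The constant here comes from $\Gamma(1+1/n)^n\to e^{-\gamma}$, which follows from the expansion $\log\Gamma(1+\varepsilon)=-\gamma\varepsilon+O(\varepsilon^2)$. A quick check with `math.gamma`:

```python
import math

EULER_GAMMA = 0.5772156649015329     # Euler-Mascheroni constant

# gamma(1 + 1/n)^n should approach exp(-gamma) ~ 0.5615 as n grows
for n in (10, 100, 1000, 10000):
    print(n, math.gamma(1 + 1 / n) ** n, math.exp(-EULER_GAMMA))
```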
Since $x^{t^k}=e^{-\log(1/x)\,t^k}$, we have for $x\lt1$, $$ x^{(n+1)^k} \le\int_n^{n+1}e^{-\log(1/x)\,t^k}\,\mathrm{d}t \le x^{n^k}\tag1 $$ Summing over $n$ yields $$ \sum_{n=1}^\infty x^{n^k}\le\int_0^\infty e^{-\log(1/x)\,t^k}\,\mathrm{d}t\le\sum_{n=0}^\infty x^{n^k}\tag2 $$ and multiplying by $\log(1/x)^{1/k}$ gives $$ \int_0^\infty e^{-t^k}\mathrm{d}t\le\log(1/x)^{1/k}\sum_{n=0}^\infty x^{n^k}\le\log(1/x)^{1/k}+\int_0^\infty e^{-t^k}\mathrm{d}t\tag3 $$ Since $$ \lim_{x\to1}\frac{\log(1/x)}{1-x}=1\tag4 $$ the Squeeze Theorem and $(3)$ say that $$ \begin{align} \lim_{x\to1^-}(1-x)^{1/k}\sum_{n=0}^\infty x^{n^k} &=\lim_{x\to1^-}\log(1/x)^{1/k}\sum_{n=0}^\infty x^{n^k}\\ &=\int_0^\infty e^{-t^k}\mathrm{d}t\\[3pt] &=\frac1k\int_0^\infty e^{-t}t^{\frac1k-1}\mathrm{d}t\\[3pt] &=\frac1k\Gamma\!\left(\frac1k\right)\\[6pt] &=\Gamma\!\left(1+\frac1k\right)\tag5 \end{align} $$ That is, $$ \lim_{x\to1^-}(1-x)\,\left(\sum_{n=0}^\infty x^{n^k}\right)^k=\Gamma\left(1+\frac1k\right)^k\tag6 $$
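Inequality $(3)$ can itself be tested numerically, using $\int_0^\infty e^{-t^k}\,\mathrm{d}t=\Gamma(1+1/k)$. This sketch checks it for $k=2,3$ at a few values of $x$, with an ad hoc $2000$-term truncation (ample for these $x$):

```python
import math

def sandwiched(k, x, terms=2000):
    # check Gamma(1+1/k) <= log(1/x)^(1/k) * sum_n x^(n^k)
    #                    <= log(1/x)^(1/k) + Gamma(1+1/k),
    # i.e. inequality (3); the cutoff is ample for these x
    L = math.log(1 / x)
    s = L ** (1 / k) * sum(x ** (n ** k) for n in range(terms))
    lo = math.gamma(1 + 1 / k)       # = integral_0^inf e^(-t^k) dt
    return lo <= s <= lo + L ** (1 / k)

for k in (2, 3):
    for x in (0.5, 0.9, 0.999):
        print(k, x, sandwiched(k, x))   # prints True in every case
```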
For $k=2$, $(6)$ yields $$ \begin{align} \lim_{x\to1^-}(1-x)\,\left(\sum_{n=0}^\infty x^{n^2}\right)^2 &=\Gamma\left(\frac32\right)^2\\ &=\frac\pi4\tag7 \end{align} $$
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\bbox[10px,#ffd]{\lim_{x \to 1^{\large -}}\bracks{\pars{1 - x} \pars{\sum_{i = 0}^{\infty}x^{i^{\large 2}}}^{2}}} = \lim_{x \to 0^{\large +}}\braces{x \bracks{\sum_{i = 0}^{\infty}\pars{1 - x}^{i^{\large 2}}}^{2}} \\[5mm] = &\ \lim_{x \to 0^{\large +}}\braces{x \bracks{\sum_{i = 0}^{\infty}\exp\pars{i^{2}\ln\pars{1 - x}}}^{2}} \\[5mm] = &\ \lim_{x \to 0^{\large +}}\braces{x \bracks{\int_{0}^{\infty}\exp\pars{-x\,i^{2}}\,\dd i}^{2}} \\[5mm] = &\ \lim_{x \to 0^{\large +}}\bracks{x \pars{{1 \over 2}\root{\pi \over x}}^{2}} = \bbx{\pi \over 4} \end{align}
See $\ds{\quad\underline{\textsf{Laplace Method for Sums}}\quad }$ in Analytic Combinatorics by Philippe Flajolet and Robert Sedgewick, Cambridge University Press $2009$.