Proving the identity $\sum_{n=-\infty}^\infty e^{-\pi n^2x}=x^{-1/2}\sum_{n=-\infty}^\infty e^{-\pi n^2/x}.$
Can you help prove the functional equation: $$\sum_{n=-\infty}^\infty e^{-\pi n^2x}=x^{-1/2}\sum_{n=-\infty}^\infty e^{-\pi n^2/x}.$$
Specifically, I am looking for a solution using complex analysis, but I am interested in any solutions.
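For what it's worth, I have convinced myself numerically that the identity holds; here is the quick sanity check I ran (just a sketch with NumPy, truncating the sums at $|n|\le 50$), though of course it is not a proof:

```python
import numpy as np

def theta(x, N=50):
    # Truncated theta sum: sum over n = -N..N of exp(-pi * n^2 * x).
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(-np.pi * n**2 * x))

for x in [0.3, 1.0, 2.7]:
    lhs = theta(x)
    rhs = x**(-0.5) * theta(1.0 / x)
    print(x, lhs, rhs, abs(lhs - rhs))  # the difference should be at rounding-error level (~1e-15)
```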
Thanks!
Solution 1:
Define $\theta(t) = \sum_{n \in \mathbb{Z}} e^{-\pi n^{2} t}$. The identity that you quote is the Jacobi theta functional equation \begin{align} \theta(t) = t^{-1/2} \theta(t^{-1}), \end{align} which can be used to prove the functional equation of the Riemann zeta function.

To prove it, first recall that the Fourier transform of an integrable function $f \colon \mathbb{R} \to \mathbb{C}$ is simply \begin{align} \tilde{f}(s) = \int_{\mathbb{R}} f(x) e^{2 \pi i x s}\, dx, \end{align} and (presupposing that $f$ is sufficiently regular and decays sufficiently fast) the Poisson summation formula is the identity \begin{align} \sum_{n \in \mathbb{Z}} \tilde{f}(n) = \sum_{n \in \mathbb{Z}} f(n). \end{align}

Now recall the Gaussian integral \begin{align} e^{-\pi s^{2}} = \int_{\mathbb{R}} e^{-\pi x^{2}} e^{2 \pi i x s}\, dx, \end{align} that is, $e^{-\pi x^{2}}$ is its own Fourier transform. Observe that \begin{align} \sum_{n \in \mathbb{Z}} e^{- \pi n^{2} s^{2}} = \sum_{n \in \mathbb{Z}} \int_{\mathbb{R}} e^{-\pi x^{2}} e^{2 \pi i x (ns)}\, dx = \sum_{n \in \mathbb{Z}} s^{-1} \int_{\mathbb{R}} e^{-\pi (x^{\prime}/s)^{2}} e^{2 \pi i n x^{\prime}}\, dx^{\prime}, \end{align} where we have changed variables, $x^{\prime} = x s$. The inner integral is the Fourier transform of $g(x) = e^{-\pi (x/s)^{2}}$ evaluated at $n$, so the Poisson summation formula gives \begin{align} \sum_{n \in \mathbb{Z}} s^{-1} \int_{\mathbb{R}} e^{-\pi (x^{\prime}/s)^{2}} e^{2 \pi i n x^{\prime}}\, dx^{\prime} = s^{-1} \sum_{n \in \mathbb{Z}} g(n) = s^{-1} \sum_{n \in \mathbb{Z}} e^{- \pi (n / s)^{2}}. \end{align} Take $s = \sqrt{t}$ to conclude that $\theta(t) = t^{-1/2}\theta(t^{-1})$.
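If you want to sanity-check the change-of-variables step, i.e. that $s^{-1}\int_{\mathbb{R}} e^{-\pi(x/s)^{2}}e^{2\pi i n x}\,dx = e^{-\pi n^{2} s^{2}}$, here is a rough numerical sketch (a crude Riemann sum in NumPy; the values $s=0.8$ and $n\le 2$ are just chosen so that neither side underflows):

```python
import numpy as np

# Check: for g(x) = exp(-pi * (x/s)^2),
#   (1/s) * integral of g(x) * exp(2*pi*i*n*x) dx  should equal  exp(-pi * n^2 * s^2).
# The Gaussian is negligible outside [-8, 8], so a crude Riemann sum there suffices.
s = 0.8
x = np.linspace(-8.0, 8.0, 200001)
dx = x[1] - x[0]
g = np.exp(-np.pi * (x / s) ** 2)
for n in range(3):
    lhs = np.sum(g * np.cos(2 * np.pi * n * x)) * dx / s  # imaginary part vanishes by symmetry
    rhs = np.exp(-np.pi * n**2 * s**2)
    print(n, lhs, rhs)
```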
Solution 2:
Hint: Apply the Poisson Summation Formula to a suitable function.
Solution 3:
The following is not a proof but it gives some physical intuition why such a formula is true.
If at time $t=0$ a unit of heat is concentrated at $x=0$ and dissipates according to the heat equation $u_t=u_{xx}$ along the $x$-axis, then the temperature $x\mapsto u(x,t)$ is a Gaussian becoming flatter and flatter as $t$ increases: $$u(x,t)={1\over\sqrt{4\pi t}}e^{-x^2/(4t)}\qquad(t>0)\ .$$

Now if at time $t=0$ we have such a unit of heat at each integer point $k$, then the resulting temperature will be $$u(x,t)={1\over\sqrt{4\pi t}}\sum_{k\in{\mathbb Z}}e^{-(x-k)^2/(4t)}\ .$$ In particular the temperature at $x=0$ will be $$U(t)={1\over\sqrt{4\pi t}}\sum_{k\in{\mathbb Z}}e^{-k^2/(4t)}\qquad(t>0)\ .\qquad {\rm (a)}$$

On the other hand, the process considered here is periodic in $x$ with period $1$. So the temperature $u(x,t)$ must have a description of the form $$u(x,t)=\sum_{k\in{\mathbb Z}}a_k(t)e^{2\pi i k x}$$ for certain functions $a_k(t)$. Plugging this into the heat equation gives $a_k(t)=c_k \exp(-4\pi^2 k^2 t)$ for constants $c_k$, so that we now have $$u(x,t)=\sum_{k\in{\mathbb Z}}c_k \exp(-4\pi^2 k^2 t)e^{2\pi i k x}\ .$$

The $c_k$ have to be determined by the initial condition, which is a delta function at $x=0$. Here we have to cheat a little: we replace the delta function by a rectangle of width $2\epsilon$ and area $1$. The computation gives $c_0=1$ and $$c_k={\sin(2\pi k\epsilon)\over 2\pi k\epsilon}\qquad(k\ne 0),$$ which tends to $1$ as $\epsilon\to0$. This means that "in the limit" we have $$u(x,t)=\sum_{k\in{\mathbb Z}} \exp(-4\pi^2 k^2 t)e^{2\pi i k x}\qquad(t>0)\ .$$ Putting $x=0$ here gives $$U(t)=\sum_{k\in{\mathbb Z}} \exp(-4\pi^2 k^2 t)\qquad(t>0)\ .\qquad{\rm (b)}$$

If you believe that (a) and (b) are the same thing, then you have the stated formula with $4\pi t$ in place of $x$.
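For what it's worth, one can check numerically that (a) and (b) agree; here is a small sketch (truncating both sums at $|k|\le 50$, which is far more than needed for the values of $t$ below):

```python
import numpy as np

def U_a(t, N=50):
    # Formula (a): superposed heat kernels evaluated at x = 0.
    k = np.arange(-N, N + 1)
    return np.sum(np.exp(-k**2 / (4.0 * t))) / np.sqrt(4.0 * np.pi * t)

def U_b(t, N=50):
    # Formula (b): the Fourier-series representation evaluated at x = 0.
    k = np.arange(-N, N + 1)
    return np.sum(np.exp(-4.0 * np.pi**2 * k**2 * t))

for t in [0.01, 0.1, 1.0]:
    print(t, U_a(t), U_b(t))  # the two columns should agree to machine precision
```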
Solution 4:
Biane, Pitman and Yor describe one classical analytic approach that admits a probabilistic interpretation: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.7395
(Not sure if the link is behind a paywall. Just google the title if so).