Why use a particular regularization for $\int_0^\infty \mathrm{d}x\,e^{i p x}$?

There are many badly defined integrals in physics. I want to discuss one of them, which I see very often: $$\int_0^\infty \mathrm{d}x\,e^{i p x}$$ I have seen this integral in many physical problems. Many people (physicists) seem to think it is a well-defined integral and calculate it as follows:

We will use regularization: introduce a small real parameter $\varepsilon > 0$ and, after the calculation, take the limit $\varepsilon \to 0$.

$$I_0=\int_0^\infty \mathrm{d}x\,e^{i p x}e^{ -\varepsilon x}=\frac{1}{\varepsilon-i p}\;\xrightarrow{\;\varepsilon\to 0^+\;}\;\frac{i}{p}$$

But I can obtain an arbitrary value for this integral! I will also use regularization, but with a different parametrization:

$$I(\alpha)=\int_0^\infty \mathrm{d}x\,e^{i p x}=\int_0^\infty \mathrm{d}x \left(1+\alpha\frac{\varepsilon \sin px}{p}\right)e^{i p x}e^{ -\varepsilon x},$$ where $\varepsilon$ is the regularization parameter and $\alpha$ is an arbitrary constant; I use $\int_0^\infty \mathrm{d}x\,\sin{(a x)}\, e^{ -\beta x}=\frac{a}{a^2+\beta^2}$.

After a not-so-difficult calculation, I obtain $I(\alpha)=\frac{i}{p}\left(1+\frac{\alpha}{2}\right)$ in the limit $\varepsilon \to 0$.
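Explicitly, splitting $\sin(px)\,e^{ipx}=\tfrac{1}{2}\sin(2px)+\tfrac{i}{2}\bigl(1-\cos(2px)\bigr)$ and using standard Laplace transforms gives
$$I(\alpha)=\frac{1}{\varepsilon-ip}+\frac{\alpha\varepsilon}{p}\left[\frac{1}{2}\,\frac{2p}{\varepsilon^2+4p^2}+\frac{i}{2}\left(\frac{1}{\varepsilon}-\frac{\varepsilon}{\varepsilon^2+4p^2}\right)\right]\;\xrightarrow{\;\varepsilon\to 0^+\;}\;\frac{i}{p}+\frac{i\alpha}{2p}.$$

One can also check both regulated integrals numerically; here is a rough sketch (the values of $p$, $\varepsilon$, $\alpha$ and the cutoff are arbitrary sample choices):

```python
import numpy as np

# Sample values (arbitrary choices for illustration, not from the question).
p, eps, alpha = 1.0, 0.05, 1.0

# Dense grid; exp(-eps * 400) ~ 2e-9, so truncating at x = 400 is harmless.
x = np.linspace(0.0, 400.0, 400_001)
dx = x[1] - x[0]

def trapezoid(f):
    """Trapezoid rule on the uniform grid x."""
    return (np.sum(f) - 0.5 * (f[0] + f[-1])) * dx

f0 = np.exp((1j * p - eps) * x)                                          # integrand of I_0
fa = (1 + alpha * eps * np.sin(p * x) / p) * np.exp((1j * p - eps) * x)  # integrand of I(alpha)

print(trapezoid(f0))  # ~ 0.050 + 0.997j, approaching i/p = 1j as eps -> 0
print(trapezoid(fa))  # ~ 0.062 + 1.497j, approaching (i/p)(1 + alpha/2) = 1.5j
```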

I have often seen this integral in intermediate calculations, but usually people do not take this problem into account and just use $I_0$. I don't understand why.

I know only one example where I can explain why we should use $I_0$. In field theory, when we calculate $U(-\infty,0)$, where $U$ is the evolution operator, it is proportional to $\int^0_{-\infty} \mathrm{d}t\,e^{ -iE t}$. This is necessary for the Weizsäcker-Williams approximation in QED, or the DGLAP equation in QCD, because in axiomatic QFT we set $T\to \infty(1-i\varepsilon)$.

My question is: why, when calculating the integral $\int_0^\infty \mathrm{d}x\,e^{i p x}$, do people use $I_0$? Why do people use the $e^{ -\varepsilon x}$ regularization function? From my point of view, this regularization is no better and no worse than any other.


Solution 1:

The first integral isn't really about adding an arbitrary regulator in the form of exponential damping $e^{-\epsilon x}$, as we might see in something like $$\sum_{n\ge 1} n =\lim_{\epsilon\rightarrow 0} \sum_{n\ge 1} n e^{-\epsilon n}=\lim_{\epsilon\rightarrow 0}\frac{1}{4 \sinh^2(\epsilon/2)}=\lim_{\epsilon\rightarrow 0}\left(\frac{1}{\epsilon^2}-\frac{1}{12}+O(\epsilon^2)\right)\stackrel{\text{magic}}{\sim} -\frac{1}{12}.$$
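(For reference, the regulated sum is just a geometric series in disguise: $\sum_{n\ge 1} n\,e^{-\epsilon n}=\frac{e^{-\epsilon}}{(1-e^{-\epsilon})^2}=\frac{1}{4\sinh^2(\epsilon/2)}$.)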

On the contrary, the point is not choosing something arbitrary which has the correct limit, but solving a more general problem by letting $p \to \tilde p = p + i \epsilon \in\mathbb C $. The result is then perfectly valid for $\text{Im}(\tilde p)=\epsilon>0$, and the limit is often written as $$I_0 = \frac{i}{p+i 0},$$ which is the form you can often see in QFT. The "$+i0$" has great physical significance, as we'll show a bit later.
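Explicitly, for $\operatorname{Im}\tilde p=\epsilon>0$ the integrand decays, $|e^{i\tilde p x}|=e^{-\epsilon x}$, and the integral converges absolutely: $$\int_0^\infty \mathrm{d}x\, e^{i\tilde p x}=\left.\frac{e^{i\tilde p x}}{i\tilde p}\right|_0^\infty=\frac{i}{\tilde p}.$$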

Methods similar to what OP did with $I(\alpha)$ would be fine, if they produced consistent answers. But OP showed that even though the regulated integrands have the same $\epsilon\rightarrow 0$ limit, the resulting values do not agree. With $I_0$, however, we don't have to introduce anything new: we just promote $p$ to a complex variable and solve the same integral we originally had. I hope this explains why this particular integral "works". I don't claim that there aren't many other dubious integrals in physics though!

Now for a bit of a lengthy digression about why we want $i0$ terms in denominators and why they make sense physically, which you can freely skip if you're satisfied with my answer to the question so far.

Let's use a Green's function to solve the forced harmonic oscillator equation. Just to recall: if we have a linear ODE (or PDE) of the form $$\mathcal L_x[y]=f(x),$$ then we can try writing the solution as a convolution with the forcing, $y(x)=\int \mathrm d x'\,G(x,x')f(x')$. Inserting this into the differential equation, we get $$f(x)=\mathcal L_x[y]=\mathcal L_x\left[\int \mathrm d x'\, G(x,x')f(x')\right]=\int \mathrm d x'\,\mathcal L_x[ G(x,x')]f(x'),$$ which means that $\mathcal L_x[ G(x,x')] = \delta(x-x')$. In most cases $G(x,x')=G(x-x')$, and for the harmonic oscillator we have $$\partial_{t}^2G(t-t') + \omega_0^2 G(t-t')=\delta(t-t'),$$ which we solve via Fourier transform. We get $$G(t-t')=-\frac{1}{2\pi}\int_{\mathbb R} \mathrm d\omega\, \frac{e^{i\omega(t-t')}}{\omega^2-\omega_0^2}.$$
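In more detail: writing $G(\tau)=\frac{1}{2\pi}\int_{\mathbb R}\mathrm d\omega\,\tilde G(\omega)\,e^{i\omega\tau}$ and $\delta(\tau)=\frac{1}{2\pi}\int_{\mathbb R}\mathrm d\omega\, e^{i\omega\tau}$ with $\tau=t-t'$, the oscillator equation gives $$(-\omega^2+\omega_0^2)\,\tilde G(\omega)=1 \quad\Longrightarrow\quad \tilde G(\omega)=-\frac{1}{\omega^2-\omega_0^2},$$ which is the integrand above.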

This doesn't converge due to the poles on the real axis. The solution is adding $+i0$ to the poles, shifting them just above the real axis. Now when $t>t'$ we must close the contour in the upper half-plane, where we pick up the two residues and everything is great. But when $t<t'$ we have to close it in the lower half-plane, and the result is zero, since the integrand is analytic there. So adding $+i0$ restricts the Green's function to $G(t,t')\to G(t,t')\theta(t-t')$, which is a statement about causality, since then we have $$y(t)=\int_{-\infty}^{\infty}\mathrm d t'\, G(t,t')\theta(t-t') f(t')=\int_{-\infty}^{t}\mathrm d t'\, G(t,t') f(t').$$
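Concretely, for $t>t'$ the two residues at $\omega=\pm\omega_0+i0$ give $$G(t-t')=-\frac{1}{2\pi}\,2\pi i\left(\frac{e^{i\omega_0(t-t')}}{2\omega_0}-\frac{e^{-i\omega_0(t-t')}}{2\omega_0}\right)=\frac{\sin\bigl(\omega_0(t-t')\bigr)}{\omega_0},$$ so altogether $G(t-t')=\theta(t-t')\,\sin\bigl(\omega_0(t-t')\bigr)/\omega_0$, which one can check satisfies the oscillator equation with a $\delta$ source.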

The same trick is used in classical electrodynamics to solve $(\nabla^2 -\tfrac{1}{c^2} \partial_t^2)\phi = -\frac{1}{\epsilon_0}\rho$, and it ensures causal propagation at the speed of light. Almost every propagator (i.e., Green's function) in QFT has $\pm i0$; these terms may be omitted in writing, but they are there. In the harmonic oscillator case, the result is essentially the same as adding damping to the original problem, which is physically sensible: if we drive with $f(t)=f_0 e^{i\Omega t}$, the undamped $y(t)$ explodes at resonance $\omega_0=\Omega$, and this is exactly what happens in the Fourier integral, where we're basically integrating over a spectral decomposition of $f(t)$, which also contains the 'resonant' component. The main point about $i0$ is the causality, of course.
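For reference, the causal (retarded) solution of the wave equation above is $$\phi(\mathbf r,t)=\frac{1}{4\pi\epsilon_0}\int \mathrm d^3r'\,\frac{\rho\!\left(\mathbf r',\,t-|\mathbf r-\mathbf r'|/c\right)}{|\mathbf r-\mathbf r'|},$$ with the source evaluated at the retarded time.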

Solution 2:

Cross-posting my answer to the copy of this question on Physics SE for visibility: Something that fixes $\frac{i}{p}$ uniquely is that, independently of regulator, it is the constant term of the asymptotic expansion of $$\int_0^b \mathrm{e}^{ipx} \,\mathrm{d}x = \frac{i\left(1-\mathrm{e}^{ipb}\right)}{p}.$$ For real $p$, the remaining term is purely oscillatory in $b$, so the only $b$-independent piece is $\frac{i}{p}$.

Moreover, consider applying your regulator with $\alpha \neq 0$ to a case where the integral actually converges, such as $p=i$. Does it still yield the correct answer? Naively, one might think the second term in the integrand doesn't matter, because $\varepsilon \rightarrow 0$ makes $\frac{\varepsilon \sin p x}{p} \rightarrow 0$; the same reasoning would apply in the convergent case. But it clearly does matter, because $\alpha \neq 0$ yields the wrong answer, as the check below shows. Thus you must let $\alpha \rightarrow 0$ at the end, or your method will be inconsistent with convergent integrals.
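To make this concrete: at $p=i$ the integral is simply $\int_0^\infty \mathrm{d}x\, e^{-x}=1$, while the modified regulator has $\frac{\sin(px)}{p}\big|_{p=i}=\sinh x$, so $$\int_0^\infty \mathrm{d}x\,\bigl(1+\alpha\varepsilon\sinh x\bigr)\,e^{-(1+\varepsilon)x}=\frac{1}{1+\varepsilon}+\frac{\alpha\varepsilon}{(1+\varepsilon)^2-1}\;\xrightarrow{\;\varepsilon\to 0^+\;}\;1+\frac{\alpha}{2},$$ which is wrong unless $\alpha=0$.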