Is this a Dirac delta function?

I had this on an exam yesterday, and I'm not entirely convinced that the statement is true. We were asked to show that the function $\delta (x) = \int_{-\infty}^{\infty} \frac{1}{t(t-x)} dt$ is a Dirac delta function by demonstrating that $I=\int_{-\infty}^{\infty} f(x)\delta(x) dx$ has all the necessary properties.

There are three things I believe should be shown:

1. The function should be infinite at a single point (this function is infinite at $t=x$).
2. It should be zero everywhere else.
3. It should satisfy $\int_{-\infty}^{\infty} f(x)\delta(x) dx=f(0)$.

I showed 2 is true by demonstrating that the Cauchy principal value of the integral is zero, which means it is zero everywhere except the one point we avoid. But I don't see how 3 holds in general. It holds for some functions, like $f(x)=x$, but what about $f(x)=1$, for example? So, is this a delta function or not, and if so, why?


Solution 1:

The question as posted is not well defined, for several reasons. That Dirac's $\delta$ is not a function but rather a distribution was pointed out before. What is also (maybe even more) relevant is that no prescription is given for how to handle the divergences due to the poles at $t=0$ and $t=x$. In fact, the question can be read as: is $I$, defined via $$I = \int_{-\infty}^{\infty} \frac{1}{(t+ i \epsilon_1) (t-x+ i \epsilon_2)} dt$$ with $\epsilon_1,\epsilon_2 \to 0$, the $\delta$-distribution? It turns out that the answer depends on whether $\epsilon_1$ (and $\epsilon_2$) approaches $0$ from above or from below. That this matters is exemplified by Sokhotski's formula.
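For concreteness, Sokhotski's formula, $\lim_{\epsilon\to 0^+} \frac{1}{x\pm i\epsilon} = \mathcal{P}\frac{1}{x} \mp i\pi\,\delta(x)$ as a weak limit, is easy to check numerically. The following sketch (SciPy assumed; the Gaussian test function and the cutoff $\pm 50$ are my own choices) pairs $1/(x+i\epsilon)$ with $e^{-x^2}$ and watches the pairing approach $-i\pi$ as $\epsilon\to 0^+$ (the principal-value part vanishes because the test function is even):

```python
import numpy as np
from scipy.integrate import quad

def pair_with_gaussian(eps):
    # ∫ e^{-x^2} / (x + i*eps) dx, computed as real and imaginary parts:
    # 1/(x + i*eps) = x/(x^2 + eps^2) - i*eps/(x^2 + eps^2)
    re = quad(lambda x: np.exp(-x**2) * x / (x**2 + eps**2),
              -50, 50, points=[0.0], limit=400)[0]
    im = quad(lambda x: np.exp(-x**2) * (-eps) / (x**2 + eps**2),
              -50, 50, points=[0.0], limit=400)[0]
    return re + 1j * im

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, pair_with_gaussian(eps))  # imaginary part tends to -π ≈ -3.14159
```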

To get more insight into the problem, we can apply the partial fraction expansion $$\frac{1}{(t+ i \epsilon_1) (t-x+ i \epsilon_2)} = \frac{1}{x+ i(\epsilon_1-\epsilon_2) } \left(\frac{1}{t-x+i\epsilon_2}- \frac{1}{t+ i \epsilon_1} \right).$$

If we perform the integral over $t$ (which is now possible as we have regularized the integral by the small imaginary parts), we obtain $$\int_{-\infty}^\infty \left(\frac{1}{t-x+i\epsilon_2}- \frac{1}{t+ i \epsilon_1} \right)dt = \begin{cases} 2\pi i & \epsilon_1 >0 , \epsilon_2 <0,\\ -2\pi i & \epsilon_1 <0 , \epsilon_2 >0,\\ 0 & \text{else}. \end{cases}$$ We see that if $\epsilon_1$ has the same sign as $\epsilon_2$, $I$ cannot be the $\delta$-distribution as it vanishes identically.

For $\epsilon_1 \epsilon_2 <0$, we have $$I = \frac{2\pi i \mathop{\rm sgn}(\epsilon_1) }{x+ i(\epsilon_1-\epsilon_2) } = 2\pi i \mathop{\rm sgn}(\epsilon_1) \left( \mathcal{P} \frac{1}{x} + i \pi \mathop{\rm sgn}(\epsilon_2-\epsilon_1) \delta(x) \right)$$ via Sokhotski's formula.
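As a sanity check on this formula, one can pair $I(x)=2\pi i/(x+i(\epsilon_1-\epsilon_2))$ (the case $\epsilon_1>0$, $\epsilon_2<0$) with test functions numerically. For an even test function the principal-value part drops out and only the $\delta$-part, $2\pi^2 f(0)$, survives; a shifted test function picks up a large imaginary contribution from the $\mathcal{P}\frac{1}{x}$ term, so $I$ is visibly not a pure multiple of $\delta(x)$. A sketch, with SciPy and Gaussian test functions as my own choices:

```python
import numpy as np
from scipy.integrate import quad

def pair(f, eta):
    # ∫ f(x) * 2πi/(x + i*eta) dx, real and imaginary parts separately
    g = lambda x: 2j * np.pi * f(x) / (x + 1j * eta)
    re = quad(lambda x: g(x).real, -50, 50, points=[0.0], limit=400)[0]
    im = quad(lambda x: g(x).imag, -50, 50, points=[0.0], limit=400)[0]
    return re + 1j * im

eta = 1e-3  # eta = eps1 - eps2 > 0
print(pair(lambda x: np.exp(-x**2), eta))      # ≈ 2π² ≈ 19.74: only the δ-part, since f is even
print(pair(lambda x: np.exp(-(x-1)**2), eta))  # real part ≈ 2π²e⁻¹ from δ, plus a large imaginary P-part
```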

We see that the distribution defined by $I$ depends very sensitively on the signs of $\epsilon_1$ and $\epsilon_2$. And in none of the many different limits is it in fact a simple $\delta$-distribution!

Solution 2:

This is a charming example! To be able to really prove something, it is best to be circumspect about what the "function" given by the integral "is": as noted in the comments, it is not a function in the classical (late-19th-century, post-Cauchy–Weierstrass) sense, although it is of course very productive to think of it as a generalized (post-L. Schwartz) function. There are choices. As in the first comment by the OP, we could take $\delta_\epsilon(x)=\int_{-\infty}^\infty {dt\over (t+i\epsilon)(t+i\epsilon +x)}$ and say that the integral with $\epsilon=0$ is *some kind of* limit as $\epsilon\rightarrow 0^+$. In particular, it is a weak limit, meaning that we only require that the limits $\lim_{\epsilon\rightarrow 0^+}\int_{-\infty}^\infty f(x)\,\delta_{\epsilon}(x)\,dx$ exist for nice-enough functions $f$.

Indeed, for $f(0)=0$, this limit certainly exists and is $0$. Assuming that the limit exists in general, this already proves that the limit must be a scalar multiple of $\delta$: the value of the limiting functional then depends only on $f(0)$, and linearly so.

To determine the scalar, and/or to see that the limit exists for all Schwartz functions (for example), the usual trick is to take a specific, convenient $f_o$ with $f_o(0)=1$ and note that for general $f$ we have $\delta_\epsilon(f)=\delta_\epsilon\big(f-f(0)\cdot f_o\big)+f(0)\cdot \delta_\epsilon(f_o)$. The first term tends to $0$ by the previous observation, since $f-f(0)\cdot f_o$ vanishes at $0$, so it suffices to evaluate $\delta_\epsilon(f_o)$ explicitly. Taking $f_o(x)=e^{-x^2}$ succeeds here...
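The subtraction trick itself uses only linearity, so it can be illustrated numerically with any nascent delta family. A quick sketch (SciPy assumed; I use the standard Lorentzian $\frac{1}{\pi}\frac{\epsilon}{x^2+\epsilon^2}$ as a stand-in family, not the $\delta_\epsilon$ of this answer) shows $\delta_\epsilon(f)$ computed directly and via the split agreeing, and tending to $f(0)$:

```python
import numpy as np
from scipy.integrate import quad

def delta_eps(f, eps):
    # pair the Lorentzian nascent delta (1/π)·eps/(x² + eps²) with a test function f
    return quad(lambda x: (eps / np.pi) / (x**2 + eps**2) * f(x),
                -50, 50, points=[0.0], limit=400)[0]

f  = lambda x: np.cos(x)        # test function with f(0) = 1
fo = lambda x: np.exp(-x**2)    # convenient reference function with fo(0) = 1

eps = 1e-3
direct = delta_eps(f, eps)
split  = delta_eps(lambda x: f(x) - f(0) * fo(x), eps) + f(0) * delta_eps(fo, eps)
print(direct, split)  # both ≈ f(0) = 1
```

By linearity of the pairing, `direct` and `split` agree to quadrature accuracy for every $\epsilon$; the point of the trick is that each piece of `split` is individually easy to control in the limit.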