The Dirac Delta Function: How do you prove the $f(0)$ property using rigorous mathematics?

Dirac, Reference 1, pg 59, says

The most important property of $\delta(x)$ is exemplified by the following equation

\begin{equation*} \int_{-\infty}^\infty f(x)\, \delta(x)\, dx = f(0) \tag{3} \end{equation*}

What might really be going on here is that the left-hand side is notation for a limiting process. We could replace the above with \begin{equation*} \lim_{\epsilon\to 0^+} \int_{-\infty}^\infty \delta_\epsilon(x) f(x)\, dx = f(0) \end{equation*} where the variable $\epsilon$ is the one used by Dirac (see p. 58 of Reference 1, if you have access to it).

Using a variable '$t$', rather than $\epsilon$, the real meaning of (3) might be taken to be represented by \begin{equation*} \lim_{t\to 0^+} \int_{-\infty}^\infty \delta_t(x) f(x)\, dx = f(0) \end{equation*} where $$ \delta_t(x) = \begin{cases} \dfrac{1}{2t} & \text{if } -t \leq x \leq t \\[4pt] 0 & \text{if } |x|>t \end{cases} $$

$ \delta_t(x) $ could be called a Top-Hat function ($t$ for Top-Hat) because of the shape of its graph.
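
Note that, directly from the definition, the Top-Hat has unit area for every $t>0$, \begin{equation*} \int_{-\infty}^\infty \delta_t(x)\, dx = \int_{-t}^{t} \frac{1}{2t}\, dx = 1, \end{equation*} which is the property that makes the family $\delta_t$ a reasonable candidate for approximating $\delta(x)$.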

Define the function $G(t)$ by \begin{equation*} G(t)= \int_{-\infty}^\infty \delta_t(x) f(x) dx \end{equation*}
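
As a quick numerical illustration (not part of the proof), here is a minimal sketch that approximates $G(t)$ for a sample choice of $f$; the test function $f(x)=\cos x$, the grid size, and the values of $t$ are my own assumptions, chosen only to show $G(t)$ approaching $f(0)=1$.

```python
# Minimal numerical sketch: approximate G(t) = (1/(2t)) * (integral of f over [-t, t])
# and watch it approach f(0) as t -> 0+.  The test function f(x) = cos(x) and the
# t values below are illustrative assumptions, not taken from the original text.
import numpy as np

def G(t, f, n=100001):
    """Midpoint-rule approximation of the average of f over [-t, t]."""
    dx = 2 * t / n
    x = -t + (np.arange(n) + 0.5) * dx   # midpoints of n equal subintervals
    # (1/(2t)) * sum(f(x_i)) * dx reduces to the plain mean of the samples
    return f(x).mean()

f = np.cos
for t in [1.0, 0.1, 0.01, 0.001]:
    print(f"t = {t:7.3f}   G(t) = {G(t, f):.8f}   f(0) = {f(0.0):.8f}")
```

The printed values of $G(t)$ approach $1$ as $t$ decreases, which is exactly the behaviour the proof below establishes in general.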

The idea of a limit of a function at a point is a "two-sided thing". For a number $l$ to be the limit of a function $G(t)$ at some point $t_0$, we have to be able to solve, for any positive $\epsilon$, however small,

\begin{equation*} |G(t)-l|<\epsilon \end{equation*}

for $t$ values in some union of open intervals,

\begin{equation*} t \in (t_0-\delta,t_0) \cup (t_0,t_0+\delta) \end{equation*}

where, in general, the positive number $\delta$ is a function of $\epsilon$.

In our case, there will exist, at best, a "one-sided limit", since $\delta_t(x)$ is only defined for $t>0$.

What we need to show, putting $t_0=0$, is that \begin{equation*} \lim_{t\to 0^+} \int_{-\infty}^\infty \delta_t(x) f(x)\, dx = f(0) \end{equation*} or

\begin{equation*} \lim_{t\to 0^+}G(t)=f(0) \end{equation*}

My question is: how do we show \begin{equation*} \lim_{t\to 0^+}G(t)=f(0) \end{equation*} using rigorous mathematics?

You might answer this question for complex-valued functions $f(x)$, or restrict your answer to real-valued $f(x)$.

Reference

  1. P. A. M. Dirac, The Principles of Quantum Mechanics, 4th ed., Clarendon Press, Oxford, 1958.

Solution 1:

My answer is restricted to the case of real-valued functions $f(x)$.

To prove

\begin{equation*} \lim_{t\to 0^+}G(t)=f(0) \end{equation*}

We need to show that, for any pre-chosen positive $\epsilon$, however small, we can find a set of values of $t$ that satisfy the inequality \begin{equation*} |G(t)-f(0)|<\epsilon \end{equation*} and that these values of $t$ form an open interval of the form $(0,\delta)$, where, in general, $\delta$ is a function of $\epsilon$.

NB: $f(x)$ is assumed to be a continuous function.

Hence, for any pre-chosen positive $\epsilon$, however small, we can find what we might call a '$\tilde{\delta}$ neighborhood of $x_0$', denoted $\tilde{\delta}_N$, at each $x_0$ in $f$'s domain:

\begin{equation*} \tilde{\delta}_N=(x_0-\tilde{\delta}, x_0+ \tilde{\delta}) \end{equation*} For any $x\in\tilde{\delta}_N$, we have

\begin{equation*} |f(x)-f(x_0)|<\epsilon \end{equation*}

Put $x_0=0$. For any allowed $\epsilon$, our chosen $\epsilon$ determines a number $\tilde{\delta}$, associated with the continuity of $f$ at $0$. We then consider $G(\tilde{\delta})$:

\begin{align*} G(\tilde{\delta})&= \int_{-\infty}^\infty \delta_{\tilde{\delta} }(x) f(x) dx\\ &= \int_{-\tilde{\delta} }^{\tilde{\delta}} \delta_{\tilde{\delta} }(x) f(x) dx \end{align*} because,

\begin{equation*} \delta_{\tilde{\delta} }(x)=0~ \text{if}~ |x|>\tilde{\delta} \end{equation*}

By $f$'s continuity, the above integral is bounded as follows (for real-valued $f(x)$ only):

\begin{equation*} f(0)-\epsilon \leq \int_{-\tilde{\delta} }^{\tilde{\delta}} \delta_{\tilde{\delta} }(x) f(x) dx\leq f(0)+\epsilon \end{equation*} Or, put another way, \begin{equation*} f(0)-\epsilon \leq G(\tilde{\delta}) \leq f(0)+\epsilon \end{equation*}
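
Spelling out why this bound holds: the continuity estimate gives $f(0)-\epsilon \leq f(x) \leq f(0)+\epsilon$ for $|x|\leq\tilde{\delta}$ (with strict inequality on the open interval). Multiplying by the non-negative factor $\delta_{\tilde{\delta}}(x)$, integrating, and using the unit-area property $\int_{-\tilde{\delta}}^{\tilde{\delta}} \delta_{\tilde{\delta}}(x)\, dx = 1$ noted earlier, we get \begin{equation*} f(0)-\epsilon = \bigl(f(0)-\epsilon\bigr)\int_{-\tilde{\delta}}^{\tilde{\delta}} \delta_{\tilde{\delta}}(x)\, dx \leq \int_{-\tilde{\delta}}^{\tilde{\delta}} \delta_{\tilde{\delta}}(x) f(x)\, dx \leq \bigl(f(0)+\epsilon\bigr)\int_{-\tilde{\delta}}^{\tilde{\delta}} \delta_{\tilde{\delta}}(x)\, dx = f(0)+\epsilon . \end{equation*}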

Hence, \begin{equation*} |G( \tilde{\delta} )-f(0)| \leq \epsilon \end{equation*}

Now, for $0<t<\tilde{\delta}$, the interval $[-t,t]$ lies inside the neighborhood $\tilde{\delta}_N$, so $|f(x)-f(0)|<\epsilon$ for every $x$ in the compact interval $[-t,t]$; since $f$ is continuous, this gives $\max_{|x|\leq t}|f(x)-f(0)|<\epsilon$, and therefore \begin{equation*} |G( t )-f(0)| \leq \frac{1}{2t}\int_{-t}^{t}|f(x)-f(0)|\, dx \leq \max_{|x|\leq t}|f(x)-f(0)| < \epsilon \end{equation*}

i.e. we have shown that, for any positive $\epsilon$, however small, we can find a set of values of $t$ that satisfy the inequality \begin{equation*} |G(t)-f(0)|<\epsilon \end{equation*} and that these values of $t$ include the whole open interval $(0, \tilde{\delta})$.

Hence,

\begin{equation*} \lim_{t\to 0^+}G(t)=f(0) \end{equation*}

Or, put another way, \begin{equation*} \lim_{t\to 0^+} \int_{-\infty}^\infty \delta_t(x) f(x) dx =f(0) \end{equation*}

So, if we accept that the above is the real meaning of \begin{equation*} \int_{-\infty}^\infty \delta(x) f(x)\, dx = f(0) \end{equation*} then we have proved the $f(0)$ property of the Dirac delta function, for real-valued $f(x)$, using rigorous mathematics.

Solution 2:

We have $$ G(t) = \int_{-\infty}^\infty \delta_t(x) f(x) \, dx = \int_{-\infty}^\infty \frac{1}{2t}\mathbf{1}_{[-t,t]}(x) f(x) \, dx = \int_{-t}^{t} \frac{1}{2t} f(x) \, dx = \frac{1}{2t} \int_{-t}^{t} f(x) \, dx . $$ Changing variable by $x=ty$ and assuming that $f$ is continuous at $x=0$, so that $f(ty) \to f(0)$ as $t\to 0$, we get $$ G(t) = \frac{1}{2t} \int_{-1}^{1} f(ty) \, t \, dy = \frac{1}{2} \int_{-1}^{1} f(ty) \, dy \to \frac{1}{2} \int_{-1}^{1} f(0) \, dy = f(0) . $$
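
The interchange of limit and integral in the last step can be made rigorous with an elementary estimate: by continuity of $f$ at $0$, \begin{equation*} \left| \frac{1}{2}\int_{-1}^{1} f(ty)\, dy - f(0) \right| \leq \frac{1}{2}\int_{-1}^{1} |f(ty)-f(0)|\, dy \leq \sup_{|x|\leq t} |f(x)-f(0)| \longrightarrow 0 \quad \text{as } t\to 0^+ . \end{equation*}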


In the rigorous theory, distributions like $\delta$ are defined as linear functionals on a space of nice "test functions" that are infinitely differentiable and vanish quickly as $|x|\to\infty$. The action of a distribution $u$ on a test function $\phi$ is often denoted by $\langle u, \phi \rangle$, and can be thought of, informally, as $\int_{-\infty}^{\infty} u(x) \, \phi(x) \, dx$. Operations on distributions are then often defined by moving the operation onto the test function. For example, differentiation is defined by $\langle u', \phi \rangle := -\langle u, \phi' \rangle,$ motivated by integration by parts. In this theory, $\delta$ is defined by $\langle \delta, \phi \rangle = \phi(0).$ You can get an introduction to the theory in Lecture notes on Distributions by Hasse Carlsson.
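
As a standard illustration of these definitions (not part of the original answer), consider the Heaviside step function $H(x)$, equal to $0$ for $x<0$ and $1$ for $x>0$, viewed as a distribution. Its distributional derivative is exactly $\delta$: \begin{equation*} \langle H', \phi \rangle := -\langle H, \phi' \rangle = -\int_{0}^{\infty} \phi'(x)\, dx = \phi(0) = \langle \delta, \phi \rangle , \end{equation*} where the boundary term at $+\infty$ vanishes because test functions decay at infinity.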