What is the product of a Dirac delta function with itself? [closed]

What is the product of a Dirac delta function with itself? What is the dot product with itself?


A distribution is actually a continuous linear functional on the space of compactly supported infinitely differentiable functions (the so-called "test functions"). A function $f$ is compactly supported if $\overline{\{x : f(x) \neq 0\}}$ is compact (the overline denotes the closure).

The $\delta$-distribution is a linear functional such that for all $\phi \in C_c^\infty(\mathbb{R}^n)$ we have that $\langle \delta, \phi \rangle = \phi(0)$.

When you want to compute the product of distributions, the problem is that you lose a property you would really like to have: associativity. So for distributions $\alpha$, $\beta$ and $\gamma$ we usually have $(\alpha \cdot \beta) \cdot \gamma \neq \alpha \cdot (\beta \cdot \gamma)$; Wikipedia gives an example. However, this does not really turn out to be a problem in applications. What we do have is convolution.
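For concreteness, the standard example of the failure of associativity (the one Wikipedia gives) uses the principal-value distribution $\operatorname{p.v.}\frac{1}{x}$: since $x \cdot \delta(x) = 0$ as a distribution while $x \cdot \operatorname{p.v.}\frac{1}{x} = 1$, we get $$\bigl(\delta(x)\cdot x\bigr)\cdot\operatorname{p.v.}\tfrac{1}{x} = 0\cdot\operatorname{p.v.}\tfrac{1}{x} = 0, \qquad \delta(x)\cdot\Bigl(x\cdot\operatorname{p.v.}\tfrac{1}{x}\Bigr) = \delta(x)\cdot 1 = \delta(x),$$ and these are different distributions.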

When we want to do convolution we prefer a smaller class of distributions, the tempered distributions (for example because the Fourier transform of a tempered distribution is again a tempered distribution). This smaller class corresponds to a larger class of test functions: here we take the Schwartz functions, i.e. the smooth functions which, together with all their derivatives, are rapidly decreasing. Here $f$ is said to be rapidly decreasing if for every $N = 1, 2, 3, \ldots$ there is a constant $M_N$ such that $|f(x)| \leq M_N |x|^{-N}$ as $|x| \to \infty$.

To begin defining the convolution of two distributions, we first define the convolution of a Schwartz function with a tempered distribution. Let $f$ be our tempered distribution; then we can show that the following definition actually makes sense: $$\langle \phi * f, \psi \rangle := \langle f, \tilde{\phi} * \psi \rangle$$ where $\tilde{\phi}(x) = \phi(-x)$. Note that the RHS is well-defined. Convolution is a nice thing: if we start with a tempered distribution and convolve it with a test function, the result is a smooth function. Now, when it exists, $L_1 * L_2$ is the unique distribution $L$ with the property that $L * \phi = L_1 * (L_2 * \phi)$ for all test functions $\phi$ (it exists, for instance, when one of the two distributions has compact support). We can show that this operation is commutative.

Fine, now note that $(\delta * \phi)(x) = \phi(x - y)|_{y = 0} = \phi(x)$, i.e. $\delta$ acts as the identity for convolution. So we see that $\delta * \delta = \delta$.
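As a numerical sanity check (not a proof), one can replace $\delta$ by a narrow Gaussian and verify that convolving it with a test function approximately reproduces the function. All names and parameters here are ad hoc choices for illustration:

```python
import numpy as np

# Sketch: convolving a test function with a narrow Gaussian (a stand-in
# for delta) approximately reproduces the function, illustrating
# delta * phi = phi. Grid, widths and the test function are arbitrary.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
sigma = 0.05
delta_approx = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
phi = np.exp(-x**2 / 2)           # a Schwartz test function

# Discrete convolution (delta_approx * phi)(x) ~ integral of
# delta_approx(y) phi(x - y) dy
conv = np.convolve(delta_approx, phi, mode="same") * dx
err = np.max(np.abs(conv - phi))
print(err)  # small
```

The error shrinks as `sigma` does, in line with $\delta * \phi = \phi$.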

If you want me to comment on the dot product of distributions, you first would have to explain what you mean by that.

So much for this short digression on distributions.

EDIT: Okay, you want to compute $\delta^2$. Let $\phi_n$ be an approximation to the identity, converging to $\delta$ in the sense of distributions. Then $\phi_n^2$ does not converge to any distribution, since its integral against a test function that does not vanish at the origin blows up as $n \to \infty$.
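This blow-up is easy to see numerically. Below is a sketch with a Gaussian approximation to the identity, $\phi_n(x) = n\,g(nx)$, and an arbitrary test function with $\psi(0) = 1$; the exact constants depend on the chosen bump, but the linear growth in $n$ does not:

```python
import numpy as np

# Sketch: with phi_n(x) = n * g(n x) for a unit Gaussian g, the action of
# phi_n^2 on a test function psi with psi(0) != 0 grows without bound.
def g(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

x = np.linspace(-5, 5, 200001)
dx = x[1] - x[0]
psi = np.exp(-x**2)               # test function with psi(0) = 1

for n in (1, 10, 100):
    phi_n = n * g(n * x)
    action = np.sum(phi_n**2 * psi) * dx
    print(n, action)              # grows roughly linearly in n
```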


Here is a heuristic to suggest that it will be difficult to define the square of the delta function.

The Fourier transform has the property that it takes the convolution of two functions to the product of their Fourier transforms, and vice versa, ie, it takes the product of two functions to their convolution.

Remember that the Fourier transform of the delta function is the constant function $1$. Now suppose that $\delta^2$ existed. Then its Fourier transform would be the convolution of two constant functions, and such a convolution is infinite at every point. Even the theory of distributions can't handle that kind of object. So how would you make sense of its inverse Fourier transform?
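One can see the first claim numerically: the Fourier transform of a narrowing Gaussian (a nascent delta) flattens toward the constant $1$. This sketch uses the convention $\hat{f}(k) = \int f(x)e^{-ikx}\,dx$ and ad hoc grid parameters:

```python
import numpy as np

# Sketch: the Fourier transform of a narrowing Gaussian approaches the
# constant function 1, matching F[delta] = 1 in the convention
# F[f](k) = integral of f(x) exp(-i k x) dx. Illustrative only.
x = np.linspace(-40, 40, 160001)
dx = x[1] - x[0]
k = np.array([0.0, 1.0, 5.0, 10.0])

for sigma in (1.0, 0.1, 0.01):
    delta_sigma = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    ft = np.array([np.sum(delta_sigma * np.exp(-1j * kk * x)) * dx for kk in k])
    print(sigma, np.round(ft.real, 4))  # tends to [1, 1, 1, 1]
```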


This answer is primarily to expand on this comment. From that comment and the following, it seems to me draks thinks of the delta as a function, and from the title, it seems the OP also does. Or at least, this was true at the time of posting the comment and the question respectively. Eradicate this misconception from your minds at once, if it is there. The delta is not a function, although it is sometimes called "Delta function".

Let me give you a bit of background, a little timeline of my relationship with the Delta.

  1. I first heard of it from my father, a Physics professor and physicist, who introduced it to me as a function equalling 0 outside 0 and infinity at 0. Such a function seemed abstruse to me, but I had other worries on my mind so I didn't bother investigating. This is how Dirac originally thought of the Delta when introducing it, but, as we shall see, this definition is useless because it doesn't yield the one most used identity involving this "function";
  2. Then I had Measure theory, and voilà a Dirac Delta again, this time a measure, which gives a set measure 0 if 0 is not in it, and 1 if 0 is in. More precisely, $\delta_0$ is a measure on $\mathbb{R}$, and if $A\subseteq\mathbb{R}$, then $\delta_0(A)=0$ if $0\not\in A$, and 1 otherwise. Actually, I was introduced to uncountably many Deltas, one for each $x\in\mathbb{R}$. $\delta_x$, for $x\in\mathbb{R}$, was a measure on the real line, giving measure 0 to a set $A\subseteq\mathbb{R}$ with $x\not\in A$, and 1 to a set containing $x$;
  3. Then I had Physics 2 and Quantum Mechanics, and this Delta popped up as a function, and I was like, WTF! It's a measure, not a function! Both courses did say it was a distribution, and not a function, so I was like, what in the world is a distribution? But both courses, when using it, always treated it like a function;
  4. Then I had Mathematical Physics, including a part of Distribution theory, and I finally was like, oh OK, that is what a distribution is! The measure and the distribution are close relatives: the distribution is nothing but integration with respect to the measure, $\langle\delta_0,\phi\rangle=\int_{\mathbb{R}}\phi\,d\delta_0=\phi(0)$.

In both settings, it is a priori meaningless to multiply two deltas. Well, one could make a product measure, but that would just be another delta on a Cartesian product, no need for special attention. In the distribution setting, we have what this answer says, which gives us an answer as to what the product might be defined as, and what problems we might run into.

So what is the product of deltas? And what is the comment's statement all about?

The answer to the first question is: there is no product of deltas. Or rather: there is no general pointwise product of distributions; what we do have is convolution, and even that needs some restrictions to be associative.

The second question can be answered as follows. That statement is a formal abbreviation. You will typically use it inside a double integral like: $$\int_{\mathbb{R}}f(\xi)\int_{\mathbb{R}}\delta(\xi-x)\delta(x-\eta)dxd\xi,$$ which with the formal statement reduces to $f(\eta)$. I have seen such integrals in Quantum Mechanics, IIRC. I remember some kind of spectral theorem for some kind of operators where there was a part of the spectrum, the discrete spectrum, which yielded an orthonormal system of eigenvectors, and the continuous spectrum somehow yielded deltas, but I will come back here to clarify after searching what I have of those lessons for details.
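One can check that reduction numerically: replace each delta by a narrow Gaussian and the double integral comes out close to $f(\eta)$. The choices of $f$, $\eta$, the width and the grid below are all ad hoc:

```python
import numpy as np

# Sketch: with narrow Gaussians in place of the deltas, the double integral
# of f(xi) * delta(xi - x) * delta(x - eta) over x and xi approaches f(eta).
xi = np.linspace(-10, 10, 2001)   # grid used for both xi and x
d = xi[1] - xi[0]
sigma = 0.05                      # width of the nascent delta
eta = 1.5
f = np.cos(xi)                    # some smooth f

def nascent(u):
    # Gaussian stand-in for the delta
    return np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# inner[i] ~ integral over x of delta(xi_i - x) delta(x - eta)
inner = np.sum(nascent(xi[:, None] - xi[None, :]) * nascent(xi[None, :] - eta),
               axis=1) * d
result = np.sum(f * inner) * d    # outer integral over xi
print(result, np.cos(eta))        # nearly equal
```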

Edit: $\newcommand{\braket}[1]{\left|#1\right\rangle} \newcommand{\xbraket}[1]{|#1\rangle}$ I have sifted a bit, and found the following:

Spectral theorem Given a self-adjoint operator $A$, the set of eigenvectors $\braket{n}$ of $A$ can be completed with a family of distributions $\braket{a}$, indexed by a continuous parameter $a$, which satisfy: \begin{align*} A\braket{n}={}&a_n\braket{n} && \braket{n}\in H, \\ A\braket{a}={}&a\braket{a} && \braket{a}\text{ distribution}, \end{align*} in such a way as to form a "generalized" basis of $H$, in the sense that all the vectors of $H$ can be written as an infinite linear combination: $$\braket{\psi}=\sum c_n\braket{n}+\int da\,c(a)\braket{a}.$$ The set of eigenvalues (proper and generalized) of $A$ is called the spectrum of $A$ and is a subset of $\mathbb{R}$.

What happens to the Parseval identity? Naturally: $$\langle\psi,\psi\rangle=\sum|c_n|^2+\int da\,|c(a)|^2.$$ So this "basis" is orthonormal in the sense that the eigenvectors are, the distributions have deltas as their inner products: $$\langle a,a'\rangle=\delta(a-a'),$$ and the inner product of an eigenvector with one of the distributions is a nice big 0.

The famous identity I mentioned in the timeline above and then forgot to expand upon is actually what defines the delta, or at least what the QM teacher used to define it: $$\int_{\mathbb{R}}f(x)\delta(x-x_0)\,dx=f(x_0),$$ for any continuous function $f:\mathbb{R}\to\mathbb{R}$ and $x_0\in\mathbb{R}$. If the $\delta$ were a function, it would have to be zero outside a single point; but altering the value of a function at a single point doesn't alter its integral, so the integral above would be the integral of a function that is 0 save for a point, hence 0, and the identity would fail whenever $f(x_0)\neq0$.
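The identity does hold in the limit of nascent deltas, which is how it is usually justified in physics. A quick numerical sketch, with an arbitrary $f$ and $x_0$ and a Gaussian in place of $\delta$:

```python
import numpy as np

# Sketch: check integral of f(x) delta(x - x0) dx = f(x0) with a narrowing
# Gaussian in place of delta. The f, x0 and grid are arbitrary choices.
x = np.linspace(-10, 10, 40001)
dx = x[1] - x[0]
f = np.sin(x) + x**2 / 10
x0 = 2.0

for sigma in (1.0, 0.1, 0.01):
    d_sigma = np.exp(-(x - x0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    val = np.sum(f * d_sigma) * dx
    print(sigma, val)             # tends to f(2) = sin(2) + 0.4
```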

Notice how this formal statement is much like an analogous statement for Kronecker deltas: $$\sum_n\delta_{nm}\delta_{nl}=\delta_{ml}.$$ Imagine taking this to the continuum: the sums become integrals, and what can $\delta_{nm}$ become if not $\delta(n-m)$? So the statement is just a formal analog of the true statement with Kronecker Deltas when going into the continuum. Of course, distributionally it makes no sense, nor in terms of measure.
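The finite-dimensional Kronecker identity, at least, can be checked mechanically: it is just "identity matrix times identity matrix is the identity matrix". A small numpy sketch (names ad hoc):

```python
import numpy as np

# Sketch: sum over n of d_{nm} d_{nl} = d_{ml} for the Kronecker delta,
# phrased as a matrix identity: I[n, m] = d_{nm}, and I^T I = I.
N = 5
I = np.eye(N)                       # I[n, m] = Kronecker delta d_{nm}
lhs = np.einsum("nm,nl->ml", I, I)  # sum over n of d_{nm} d_{nl}
print(np.array_equal(lhs, np.eye(N)))  # True
```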

I have no idea how integrals with two deltas may be useful, and I have found none in my sifting. I will sift more, and perhaps Google, and if I find anything interesting, I'll be back.

Update: $\newcommand{\lbar}{\overline} \newcommand{\pa}[1]{\left(#1\right)}$ I decided I'd just stop the sifting and concentrate on my exams. I googled though, and found this.

Another argument I thought up myself in favor of the statement is the following. Let $\phi$ be a function. It is pretty natural to write: $$\phi(x)=\int_{\mathbb{R}}\phi(a)\delta(x-a)da,$$ since for any $x$ the right-hand side formally yields $\phi(x)$. Now what happens to the $L^2$-norm? $$N:=\|\phi\|_{L^2}^2=\int_{\mathbb{R}}\lbar{\phi(x)}\phi(x)dx=\int_{\mathbb{R}}\lbar{\int_{\mathbb{R}}\phi(a')\delta(x-a')da'}\cdot\pa{\int_{\mathbb{R}}\phi(a)\delta(x-a)da}dx.$$ The complex conjugation can be brought inside the first integral. Now to a physicist integrals that don't swap are evil, and we surely don't want any evil around, so we assume we can reorder the three integrals the way we want, and get: $$N=\int_{\mathbb{R}}da\,\phi(a)\cdot\pa{\int_{\mathbb{R}}da'\,\lbar{\phi(a')}\cdot\pa{\int_{\mathbb{R}}dx\,\delta(x-a)\delta(x-a')}}.$$ Suppose the formal statement holds. Then the innermost integral yields $\delta(a-a')$, the second innermost one yields $\lbar{\phi(a)}$, and this combines with the $\phi(a)$ outside to give $|\phi(a)|^2$, whose integral is indeed the squared $L^2$-norm of $\phi$. If the statement didn't hold, there would be no reason to recover the squared norm from that mess; so the formal statement is exactly what is needed to make the swapped integrals come out right.
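There is also a concrete regularized version of the innermost integral: for Gaussian nascent deltas, the product of two of them integrates exactly to another nascent delta (of width $\sqrt{2}\,\sigma$) in $a-a'$, which is what makes the formal manipulation come out right in the limit. A sketch with arbitrary parameters:

```python
import numpy as np

# Sketch: for Gaussian nascent deltas g(., sigma), the integral of
# g(x - a, sigma) * g(x - a', sigma) over x equals g(a - a', sigma*sqrt(2)),
# i.e. the product of two nascent deltas integrates to a nascent delta.
def g(u, s):
    return np.exp(-u**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 40001)
dx = x[1] - x[0]
sigma, a, ap = 0.1, 0.3, 0.45     # arbitrary width and centers

lhs = np.sum(g(x - a, sigma) * g(x - ap, sigma)) * dx
rhs = g(a - ap, sigma * np.sqrt(2))
print(lhs, rhs)                   # agree closely
```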


I think the answer provided by Jonas is correct, but Jonas assumed "product" meant convolution, as convolution is normally the only binary operation that makes sense for the Dirac delta.

So if by product you mean convolution then: $\delta * \delta = \delta$.

If by "product" you meant point-wise multiplication, then the answer is: Undefined.

The usual approach is to treat $\delta$ as the limit of some nascent delta function. See Delta Function for examples of these nascent delta functions. If you multiply two nascent $\delta$ functions pointwise and then take the limit, the result depends on the particular pair selected: depending on the shapes chosen and the relative rates of the limits, the integral of the pointwise product may tend to zero, to a finite value, or to infinity.

For example, take two rectangular nascent $\delta$'s of height $1/w$, one supported on $(-w, 0)$ and one on $(0, w)$: their pointwise product is identically zero, so its integral is $0$ for every $w$. Multiply the centered rectangle of width $w$ and height $1/w$ with itself and the integral of the product is $1/w$, which tends to infinity. If you multiply the sinc version of the nascent $\delta$ with itself, the integral is proportional to the frequency of the sinc function, and again tends to infinity as that frequency grows.
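The pair-dependence of the limit is easy to demonstrate numerically. This sketch uses the two rectangle examples above; the grid and widths are ad hoc:

```python
import numpy as np

# Sketch: the integral of the pointwise product of two nascent deltas
# depends entirely on which pair you pick. Two one-sided rectangles give 0
# for every width; a centered rectangle with itself gives 1/w -> infinity.
x = np.linspace(-1, 1, 400001)
dx = x[1] - x[0]

for w in (0.1, 0.01, 0.001):
    rect = np.where(np.abs(x) < w / 2, 1.0 / w, 0.0)    # centered rectangle
    left = np.where((-w < x) & (x < 0), 1.0 / w, 0.0)   # support (-w, 0)
    right = np.where((0 < x) & (x < w), 1.0 / w, 0.0)   # support (0, w)
    print(w,
          np.sum(left * right) * dx,   # 0 for every w
          np.sum(rect * rect) * dx)    # ~ 1/w, blows up
```

Each of `rect`, `left`, `right` integrates to (approximately) 1 and converges to $\delta$ as $w \to 0$, yet the products behave completely differently.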

Because of this, the original answer given ("the question makes no sense") is the best: the term "product" in its common meaning (e.g. pointwise multiplication of two functions) leads to contradictory results under the limit process. Using the term "product" when you mean convolution (the one operation that produces meaningful results) is at best confusing.