Is any real-valued function in physics somehow continuous?

Consider the following well-known function: $$ \operatorname{sinc}(x) = \begin{cases} \sin(x)/x & \text{for } x \ne 0 \\ 1 & \text{for } x =0 \end{cases} $$ In physics, the sinc function has applications in, for example, spectrography. Mathematically speaking, there would be no objection to an alternative like this: $$ \operatorname{suck}(x) = \begin{cases} \sin(x)/x & \text{for } x \ne 0 \\ 0 & \text{for } x =0 \end{cases} $$ But in physics such an alternative would be void of applications. It is silently assumed that $\operatorname{sinc}(x)$ is continuous at $x=0$; physicists do not even think about a discontinuous alternative.
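The two definitions can be put side by side numerically; a quick sketch in plain Python (function names as above) shows that near $x=0$ both approach $1$, so only sinc agrees with its own limit:

```python
import math

def sinc(x):
    # Continuous choice: the value at x = 0 equals the limit there.
    return math.sin(x) / x if x != 0 else 1.0

def suck(x):
    # Discontinuous alternative: same formula, a different value at 0.
    return math.sin(x) / x if x != 0 else 0.0

# Near x = 0 both functions approach 1, but only sinc matches its own value there.
for x in (1e-2, 1e-4, 1e-8):
    print(x, sinc(x), suck(x))
print(sinc(0.0), suck(0.0))
```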

The sinc function is only an example of a far more general claim, made by one of my heroes, the great mathematician L.E.J. Brouwer. It is (not very well) known as Brouwer's Continuity Theorem, roughly stating that every real-valued function is continuous. More precisely, as quoted from Strong Counterexamples: "In intuitionistic mathematics, the Brouwer Continuity Theorem states that all total real functions are (uniformly) continuous on the unit interval".

Real-valued physical quantities have uncertainties. That is one of the fundamental properties of physics, and it isn't just due to quantum considerations. Take an average metal bar: it has no exact length. There are, for example, temperature fluctuations (atoms in motion) which cause the bar's length to fluctuate. This effectively means that any real number in physics is accompanied by an uncertainty, an error, often denoted as $\delta$ or $\varepsilon$.

Consider the classical mathematical definition of continuity of a function. All numbers are assumed to be real-valued. A function $f(x)$ is said to be continuous at $x=a$ if and only if for all $\varepsilon > 0$ there exists a $\delta > 0$ such that if $|x-a| < \delta$ then $|f(x)-f(a)| < \varepsilon$, where it may be that $x \ne a$.
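The $\varepsilon$–$\delta$ game can be played numerically for sinc at $a=0$. A sketch with an illustrative helper `delta_for` (my own name; a finite sample can only illustrate continuity, not prove it):

```python
import math

def sinc(x):
    return math.sin(x) / x if x != 0 else 1.0

def delta_for(f, a, eps, candidates=(1e-1, 1e-2, 1e-3, 1e-4)):
    """Illustrative helper: try a few candidate deltas and return the first
    that keeps |f(x) - f(a)| < eps on a small sample of points with
    |x - a| < delta."""
    for delta in candidates:
        xs = [a + delta * t for t in (-0.999, -0.5, -1e-6, 1e-6, 0.5, 0.999)]
        if all(abs(f(x) - f(a)) < eps for x in xs):
            return delta
    return None

# For each eps a suitable delta turns up, as continuity at a = 0 demands.
for eps in (1e-1, 1e-3, 1e-5):
    print(eps, delta_for(sinc, 0.0, eps))
```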

A physical interpretation of this might be formulated as follows: an error in a continuous function can be made as small as desired by adapting the error in the function's argument accordingly. Due to the errors, $|x-a|<\delta$ is physically the same as $x\approx a$ ($x$ equals $a$ approximately) and this can be said for $f(x)$ and $f(a)$ as well. So we can even write $\; x\approx a \,\Longrightarrow\, f(x)\approx f(a)\;$ , as a (sloppy) definition of continuity. The latter formulation is even closer to Brouwer's Continuity Theorem, if we replace the $\,\approx\,$ by a common equal sign: $\; x=a \,\Longrightarrow\, f(x)=f(a)\;$ , expressing the idea that a function is continuous where it really is .. a function!

Now consider again the above suck function. However small it may be, there is inevitably an error in the argument, meaning that $x=0$ should actually be replaced by an interval $|x-0| < \delta$. That interval contains values $x\ne 0$, though, and $\,\lim_{x\to 0} \operatorname{suck}(x) = 1$. Hence, physically speaking, $\operatorname{suck}(0) = 1\,$ and $\operatorname{suck}(0) = 0\,$ would have to be true at the same time, which is impossible. IMHO this is the reason why the value $1$ at $x=0$ is adopted automatically in physics, resulting inevitably in our old friend the sinc function and nothing else.
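The argument can be made concrete: sample suck over any error interval $(-\delta,\delta)$ and the assigned value $0$ never shows up. A minimal sketch:

```python
import math

def suck(x):
    return math.sin(x) / x if x != 0 else 0.0

# A measurement of the argument has an error delta > 0, so "x = 0" really
# means: some unknown x in (-delta, delta).  Sampling that interval shows
# every attainable value of suck is close to 1; the assigned value
# suck(0) = 0 can never be observed.
delta = 1e-3
samples = [suck(k * delta / 100) for k in range(-100, 101) if k != 0]
print(min(samples), max(samples))
```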

I'm well aware of the fact that this way of physical reasoning does not cover all sorts of continuity that mathematicians might think of. So the question is which sorts of continuity are subject to the automatism that is present in the sinc function, and which sorts of continuity are distinct from it. It's a somewhat vague question, but I am a humble physicist by education and I do not know of any better way to formulate it.

EDIT. A far simpler example of a function with the same sort of "automatism" as the $\operatorname{sinc}$ function is given by: $$ f(x) = \begin{cases} (x^2-1)/(x-1) & \text{for } x \ne 1 \\ 2 & \text{for } x=1 \end{cases} $$ This is physically the same as $\,f(x) = x+1$ . A counterexample is the function $\,g(x) = 1/x$ , much like the one given by snulty . So it seems that some singularities are "essential" (physically speaking) while others are not. Can someone be more specific? Because I find it a can of worms, as is exemplified by related Q&A in MSE and elsewhere:

  • Cauchy distribution instead of Coulomb law?
  • Could this be called Renormalization?
  • Does this limit exist and if so what is it's value?
  • Can monsters of real analysis be tamed in this way?
  • Computability, Continuity and Constructivism
  • Delta function that obeys inverse square law
    outside its (-1; 1) range and has no 1/0 infinity
  • Critical Mass Flow

Solution 1:

The canonical example of this is the apparent singularity that arises in spherical coordinates when you pass around the earth only to find that your longitude has jumped from $180$ to $-180$. Or, for example, the singularity that arises in the Laplacian in spherical coordinates. These are all non-physical and are a consequence of choosing a coordinate system.
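A short sketch makes the longitude jump explicit; `longitude` below is my own illustrative name, computing the coordinate from the angle actually walked:

```python
import math

# Longitude (in degrees) of a point on the equator, as a function of the
# angle t walked eastward.  Crossing the antimeridian, the walker notices
# nothing, but the coordinate jumps from +180 to -180.
def longitude(t):
    return math.degrees(math.atan2(math.sin(t), math.cos(t)))

t = math.pi - 1e-9           # just short of the antimeridian
print(longitude(t))           # close to +180
print(longitude(t + 2e-9))    # close to -180: the coordinate jumps, the walker doesn't
```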

A prime example of this kind of thing comes up in general relativity where you'll see singularities in your metric. In many situations these singularities are actually artifacts of the coordinate system chosen, see here for example: https://physics.stackexchange.com/questions/223549/coordinate-singularity-in-metric

Here's a wonderful read on this subject: "What is a Singularity - Geroch", and the takeaway quote:

The presence or absence of a coordinate singularity is not a property of the spacetime itself, but rather of the physicist who has chosen the coordinates by which the spacetime is described.

However, sometimes these singularities point at failings of a given theory, like the ultraviolet catastrophe for example. See here. In particular, only if a singularity exists in all coordinate systems (i.e. is diffeomorphism-invariant) can we conclude that perhaps this is a failing of our current theory. The point is that nature should not care about our choice of coordinate system.

Solution 2:

This is a cleaned up version of some comments of mine on the original question.

Some bad behavior is removable, some is not. The behavior that is removable in some sense "already is" removed, from the physicists' perspective. For example, sinc can be thought of as (a multiple of) the Fourier transform of the indicator function of some interval symmetric about zero. Such a function is really only uniquely determined up to a.e. equivalence, so we may freely choose our "favorite" representative to be "the function".
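This Fourier-transform picture is easy to check numerically: integrating $e^{-ikx}$ over $[-1,1]$ gives $2\sin(k)/k$, a multiple of sinc. A sketch (midpoint rule; helper name mine):

```python
import math

def ft_indicator(k, n=200000):
    """Fourier transform of the indicator of [-1, 1] at frequency k,
    by a midpoint-rule integral of cos(k x) (the odd part vanishes)."""
    h = 2.0 / n
    return sum(math.cos(k * (-1.0 + (j + 0.5) * h)) for j in range(n)) * h

# The numerical transform reproduces 2 sin(k)/k, i.e. a multiple of sinc(k).
for k in (0.5, 1.0, 3.0):
    print(k, ft_indicator(k), 2 * math.sin(k) / k)
```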

To put it another way, we can identify the "physical version" of a function $f$ as $\frac{d}{dx} \int_a^x f(y) dy$. This is what you would get by averaging $f$ over smaller and smaller intervals containing $x$. It can happen that this limit doesn't exist. In this case $f$ "really does" have a singularity, and that must be addressed somehow for $f$ to have any physical meaning.
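The averaging recipe can be tried directly on the suck function from the question: its shrinking local averages at $0$ converge to $1$, not to the assigned value $0$. A sketch (helper name mine):

```python
import math

def suck(x):
    return math.sin(x) / x if x != 0 else 0.0

def local_average(f, x0, h, n=10000):
    """Average f over [x0 - h, x0 + h] by a midpoint Riemann sum: a numerical
    stand-in for (d/dx) of the integral of f, evaluated at x0."""
    step = 2 * h / n
    return sum(f(x0 - h + (j + 0.5) * step) for j in range(n)) * step / (2 * h)

# The shrinking averages tend to 1, not 0: the "physical version" of suck
# is sinc, with the removable defect at 0 repaired automatically.
for h in (1e-1, 1e-2, 1e-3):
    print(h, local_average(suck, 0.0, h))
```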

Because singularities almost never really exist in nature, usually the answer is that there is some gap between the model and the reality. For example, it could be that the "physical" equation has a "regularizing" term with a small coefficient that you are neglecting. In this case or similar cases, the "real" function might not have any actual discontinuity but there could be a scale separation at the position that your equation predicts a discontinuity. Understanding this scale separation, even if only through a somewhat unrealistic model, is useful.

Solution 3:

Discontinuous functions are fairly common.

What's the magnitude of the force between two point charges, or particles which can be considered point charges,

$$F=\frac{kq_1 q_2}{r^2}$$

where the $q$'s are the charges, $k$ is a constant, and $r$ is the distance between them.

This is quite clearly discontinuous when the distance is zero, and it diverges as the particles get arbitrarily close to each other. The same thing happens for Newtonian gravity.

In fact this causes a problem when taking a Fourier transform of the potential, which is usually 'fixed' either by giving the photon a mass and letting that mass go to zero afterward, or by admitting that we can't know the exact form of the potential with arbitrary precision (our experiments aren't good enough): an extra factor $e^{-ar}$ is inserted into the potential and $a\to 0$ is taken after the calculation.
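The screening trick is easy to reproduce: with the factor $e^{-ar}$ the radial Fourier integral converges and gives the known closed form $4\pi/(k^2+a^2)$, which tends to the Coulomb $4\pi/k^2$ as $a\to 0$. A numerical sketch (function name mine):

```python
import math

def yukawa_ft_numeric(k, a, R=200.0, n=200000):
    """3-D Fourier transform of exp(-a r)/r, reduced to the radial integral
    (4 pi / k) * integral_0^R exp(-a r) sin(k r) dr (midpoint rule).
    Without the screening factor (a = 0) the integrand never decays and
    the integral fails to converge."""
    h = R / n
    s = sum(math.exp(-a * (j + 0.5) * h) * math.sin(k * (j + 0.5) * h)
            for j in range(n))
    return 4 * math.pi / k * s * h

# The screened transform matches the closed form 4 pi / (k^2 + a^2) and
# tends to the Coulomb result 4 pi / k^2 as a -> 0.
k = 2.0
for a in (1.0, 0.3, 0.1):
    print(a, yukawa_ft_numeric(k, a), 4 * math.pi / (k**2 + a**2))
```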

The other thing that happens is when you want to think about point sources: physicists introduce the Dirac delta 'function', which is more properly described as a distribution, and which can be obtained as the (discontinuous) limit of continuous functions.
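One standard nascent-delta construction, a normalized Gaussian of shrinking width, can be smeared against a test function to watch the distributional action $\int f\,\delta = f(0)$ emerge. A sketch (names mine):

```python
import math

def gaussian(x, eps):
    # Nascent delta: a normalized Gaussian of width eps.
    return math.exp(-x**2 / (2 * eps**2)) / (eps * math.sqrt(2 * math.pi))

def smeared(f, eps, lo=-10.0, hi=10.0, n=100000):
    """Midpoint-rule value of the integral of f(x) * g_eps(x): as eps -> 0
    this tends to f(0), which is how the delta acts as a distribution."""
    h = (hi - lo) / n
    return sum(f(lo + (j + 0.5) * h) * gaussian(lo + (j + 0.5) * h, eps)
               for j in range(n)) * h

# Each g_eps is perfectly continuous; only the limiting behavior is singular.
for eps in (1.0, 0.1, 0.01):
    print(eps, smeared(math.cos, eps))   # tends toward cos(0) = 1
```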

Solution 4:

Being continuous or not is often a matter of reference frame: origin, orientation of axes, units, or scale.

When a discontinuity appears, the source often lies in a simplification of the model; think of reducing a moving object to its center of mass.

But would physics exist without discontinuity? Even if an equation is continuous and differentiable, it seems to me that discontinuities in the derivatives often arise at a sufficiently high order. Is that because time could be discrete?