Looking for examples of Discrete / Continuous complementary approaches

Among the many fascinating sides of mathematics, there is one that I particularly prize, especially for didactic purposes: the parallels that can be drawn between certain "continuous" and "discrete" concepts.

I am looking for examples that help build a global understanding...

Disclaimer: being driven, as said above, mainly by didactic purposes, I am not in need of full rigor here, although I do not at all deny the interest of a rigorous approach in other contexts, where it can be essential to show in which sense the continuous "object" is the limit of its discrete counterparts.

I would appreciate it if colleagues could give examples of their own, in the style "my favorite one is...", or references to works on this theme.

Let me provide, on my side, five examples:


1st example: How to obtain the equations of certain epicycloids, here a nephroid:

Consider an $N$-sided regular polygon $A_1, A_2, \cdots, A_N$ with $N$ large enough, say around $50$. Let us connect every point $A_k$ to the point $A_{3k}$ by a line segment (we assume a cyclic numbering). As can be seen in Fig. 1, a certain envelope curve is "suggested".

Question: which (smooth) curve is behind this construction?

Answer: let us consider two consecutive line segments like those represented with a larger width in Fig. 1. The evolution speed of $A_{3k} \to A_{3k'}$, where $k'=k+1$, is three times the evolution speed of $A_{k} \to A_{k'}$; consequently, the pivoting of the line segment takes place at the point (of the line segment) which is $3$ times closer to $A_k$ than to $A_{3k}$ (the weights' ratio $3:1$ comes from the size ratio of the "homothetic" triangles $P_kA_kA_{k'}$ and $P_kA_{3k}A_{3k'}$). Said in an algebraic way:

$$P_k=\tfrac{3}{4}e^{ika}+\tfrac{1}{4}e^{3ika}$$

($A_k$ is identified with $e^{ika}$ with $a:=\tfrac{2 \pi}{N}$).

Replacing now discrete values $ka$ by a continuous parameter $t$, we get

$$z=\tfrac{3}{4}e^{it}+\tfrac{1}{4}e^{3it}$$

i.e., a parametric representation of the nephroid, or the equivalent real equations :

$$\begin{cases}x=\tfrac{3}{4}\cos(t)+\tfrac{1}{4}\cos(3t)\\ y=\tfrac{3}{4}\sin(t)+\tfrac{1}{4}\sin(3t)\end{cases}$$


Fig. 1: The nephroid as an envelope. It can be viewed as the trajectory of a point of a small circle with radius $\dfrac14$ rolling outside a circle with radius $\dfrac12$ (the whole curve staying inside the unit circle).

Remark: if, instead of connecting $A_k$ to $A_{3k}$, we had connected it to $A_{2k}$, we would have obtained a cardioid; with $A_{4k}$, a three-cusped epicycloid, etc.
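As a small numerical companion, here is a minimal sketch (assuming `numpy` and `matplotlib` are available; $N=50$ as in the text) that draws the chords $A_kA_{3k}$ together with the candidate curve $z=\tfrac34 e^{it}+\tfrac14 e^{3it}$, so one can see the envelope being hugged:

```python
import numpy as np
import matplotlib.pyplot as plt

N = 50                                   # number of polygon vertices, as in the text
a = 2 * np.pi / N
k = np.arange(N)

# Chords A_k -> A_{3k} (indices are cyclic, which the exponential handles for free)
A = np.exp(1j * k * a)                   # vertices A_k on the unit circle
B = np.exp(1j * 3 * k * a)               # targets A_{3k}
for z1, z2 in zip(A, B):
    plt.plot([z1.real, z2.real], [z1.imag, z2.imag], 'b-', lw=0.4)

# The candidate envelope: z = (3/4) e^{it} + (1/4) e^{3it}
t = np.linspace(0, 2 * np.pi, 400)
z = 0.75 * np.exp(1j * t) + 0.25 * np.exp(3j * t)
plt.plot(z.real, z.imag, 'r-', lw=1.5)

plt.gca().set_aspect('equal')
plt.show()
```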


2nd example: Coupling "second derivative $\leftrightarrow$ $\min$ kernel":

All functions considered here are at least $C^2$, except the kernel $K$.

Let $f:[0,1] \rightarrow \mathbb{R}$, and let $K:[0,1]\times[0,1]\rightarrow \mathbb{R}$ (a so-called "kernel") be defined by $K(x,y):=\min(x,y)$.

Let us associate with $f$ the function $\varphi(f)=g$ defined by $$\tag{1}g(y)=\int_{t=0}^{t=1} K(t,y)f(t)dt=\int_{t=0}^{t=1} \min(t,y)f(t)dt$$

We can get rid of the "$\min$" function by decomposing the integral into:

$$\tag{2}g(y)=\int_{t=0}^{t=y} t f(t)dt+\int_{t=y}^{t=1} y f(t)dt$$

$$\tag{3}g(y)=\int_{t=0}^{t=y} t f(t)dt - y F(y)$$

where we have set

$$\tag{4}F(y):=\int_{t=1}^{t=y}f(t)dt \ \ \ \ \ \ \ \ \text{Remark:} \ \ \ F'(y)=f(y)$$

Let us differentiate (3) twice:

$$\tag{5}g'(y)=y f(y) - F(y) - y f(y) = -F(y)$$

$$\tag{6}g''(y)= -f(y) \ \ \Longleftrightarrow \ \ f(y)=-g''(y)$$

Said otherwise, the inverse of the transform $f \rightarrow \varphi(f)=g$ is:

$$\tag{7}\varphi^{-1} = \text{opposite of the second derivative.}$$

This connection with the second derivative is rather unexpected...
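For the skeptical reader, the identity $-g''=f$ can be checked symbolically on a test function; a minimal sketch, assuming `sympy` (the test function $\cos(3t)$ is an arbitrary choice):

```python
import sympy as sp

t, y = sp.symbols('t y')
f = sp.cos(3 * t)                                        # an arbitrary C^2 test function

# g(y) = integral of min(t, y) f(t) over [0, 1], split as in (2)
g = sp.integrate(t * f, (t, 0, y)) + y * sp.integrate(f, (t, y, 1))

print(sp.simplify(-sp.diff(g, y, 2) - sp.cos(3 * y)))    # 0, i.e. -g'' = f
```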

Had we taken a discrete approach, what would we have found?

The discrete equivalents of $\varphi$ and $\varphi^{-1}$ are matrices:

$$\bf{M}=\begin{pmatrix}1&1&1&\cdots&\cdots&1\\1&2&2&\cdots&\cdots&2\\1&2&3&\cdots&\cdots&3\\\vdots&\vdots&\vdots&\ddots&&\vdots\\\vdots&\vdots&\vdots&&\ddots&\vdots\\1&2&3&\cdots&\cdots&n \end{pmatrix} \ \ \textbf{and}$$ $$\bf{D}=\begin{pmatrix}2&-1&&&&\\-1&2&-1&&&\\&-1&2&-1&&\\&&\ddots&\ddots&\ddots&\\&&&-1&2&-1\\&&&&-1&1 \end{pmatrix}$$

which satisfy the matrix identity $\bf{M}^{-1}=\bf{D}$, in analogy with (7).

Indeed,

  • The connection between the matrix $\bf{M}$, with coefficients $\bf{M}_{i,j}=\min(i,j)$, and the kernel $K$ needs no comment.

  • The tridiagonal matrix $\bf{D}$ is well known (in particular to anyone doing discretization) to be "the" discrete analog of (the opposite of) the second derivative, due to the classical approximation:

$$f''(x)\approx\dfrac{1}{h^2}\left(f(x-h)-2f(x)+f(x+h)\right)$$

that can easily be obtained using Taylor expansions. The exceptional value $1$ at the bottom right of $\bf{D}$ is explained by discrete boundary conditions.
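The identity $\bf{M}^{-1}=\bf{D}$ is also easy to check numerically; a minimal sketch, assuming `numpy` (the size $n=6$ is arbitrary):

```python
import numpy as np

n = 6
i, j = np.indices((n, n))
M = np.minimum(i, j) + 1                   # M[i,j] = min(i+1, j+1), 1-based indices

D = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D[-1, -1] = 1                              # the boundary value in the last row

print(np.allclose(np.linalg.inv(M), D))    # True
```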

Remark: this correspondence between the "min" kernel and the second derivative is not mine; I have known it for a long time, but I am unable to trace back where I first saw it (possibly in a signal processing book). Does somebody have a reference?

Related: the eigenvalues of $\bf{D}$ are remarkable (http://www.math.nthu.edu.tw/~amen/2005/040903-7.pdf).

In the same vein: the computation of adjoint operators.


3rd example: the Itô integral.

One could think that the Lebesgue integral (1902) is the ultimate theory of integration, correcting the imperfections of the theory elaborated by Riemann some 50 years before. This is not the case. In particular, Itô defined (1942) a new kind of integral which is now essential in probability and finance. Its principle, roughly said, is that the infinitesimal "deterministic" increments "$dt$" are replaced by random increments "$dW$" of Brownian-motion type, as formalized by Einstein (1905), then by Wiener (1923). Let us give an image of it.

Let us first recall two definitions of Brownian motion $W(t)$ or $W_t$ ($W$ for Wiener): an informal one and a formal one.

Informal : A "particle" starting at $x=0$ at time $t$, jumps "at the next instant" $t+dt$, to a nearby position; either on the left or on the right, the amplitude and sign of the jump being governed by a normal distribution $N(x,\sigma^2)$ with an infinitesimal fixed standard deviation $\sigma.$

$\text{Formal}: \ \ W_t:=G_0 t+\sqrt{2}\sum_{n=1}^{\infty}G_n\dfrac{\sin(\pi n t)}{\pi n}$, with $G_n$ iid $N(0,1)$ random variables.

(Other definitions exist. This one, in the form of a "random Fourier series", is handy for many computations.)
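This series definition lends itself directly to simulation. Here is a minimal sketch (assuming `numpy` and `matplotlib`; the truncation order is an arbitrary choice) plotting one sample path on $[0,1]$:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N_terms = 500                             # truncation order of the random Fourier series
t = np.linspace(0, 1, 1000)

G = rng.standard_normal(N_terms + 1)      # iid N(0,1) variables G_0, G_1, ..., G_N
n = np.arange(1, N_terms + 1)

# W_t = G_0 t + sqrt(2) * sum_n G_n sin(pi n t)/(pi n), truncated at N_terms
W = G[0] * t + np.sqrt(2) * (G[1:] / (np.pi * n)) @ np.sin(np.pi * np.outer(n, t))

plt.plot(t, W)
plt.xlabel('t'); plt.ylabel('W_t')
plt.show()
```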

Let us now consider one of the fundamental formulas of Itô's integral, for a continuously differentiable function $f$:

$$\tag{8}\begin{equation} \displaystyle\int_0^t f(W(s))dW(s) = \displaystyle\int_0^{W(t)} f(\lambda)d \lambda - \frac{1}{2}\displaystyle\int_0^t f'(W(s))ds. \end{equation}$$

Remark: the integral sign on the LHS of (8) defines Itô's integral, whereas the integrals on the RHS have to be understood in the sense of Riemann/Lebesgue. The presence of the second term on the RHS is rather puzzling, isn't it?

Question: how can this second integral be understood/justified?

Szabados proposed (1990) a discrete analog of formula (8) (see https://mathoverflow.net/questions/16163/discrete-version-of-itos-lemma). Here is how it runs:

Theorem: let $f:\mathbb{Z} \longrightarrow \mathbb{R}$. Let us define:

$$ \tag{9} F(k)=\left\{\begin{array}{lcl} \dfrac{1}{2}f(0)+\displaystyle\sum_{j=1}^{k-1} f(j)+\dfrac{1}{2}f(k) & \text{if } k \geq 1 & (a)\\ 0 & \text{if } k = 0 & (b)\\ -\dfrac{1}{2}f(k)-\displaystyle\sum_{j=k+1}^{-1} f(j)-\dfrac{1}{2}f(0) & \text{if } k \leq -1 & (c) \end{array}\right. $$

Remarks:

  1. We will work only on (a) and its particular case (b).

  2. (a) is nothing else than the "trapezoid formula", which explains in particular the factors $\dfrac{1}{2}$ in front of $f(0)$ and $f(k)$.

Let us now define a family of random variables $X_k$, $k=1, 2, \cdots$, iid on $\{-1,1\}$ with $P(X_k=-1)=P(X_k=1)=\frac{1}{2}$, and let

$$ \begin{equation} S_n= \displaystyle\sum_{k=1}^n X_k. \end{equation} $$

Then

$$ \tag{10}\begin{equation} \forall n, \ \ \displaystyle\sum_{i=0}^{n}f(S_i)X_{i+1} = F(S_{n+1})-\dfrac{1}{2}\displaystyle\sum_{i=0}^{n}\dfrac{f(S_{i+1})-f(S_{i})}{X_{i+1}} \end{equation} $$

Remark: please note the analogies:

  • between $\frac{f(S_{i+1})-f(S_{i})}{X_{i+1}}$ and $f'(S_i)$.

  • between $F(k)$ and $\displaystyle\int_{\lambda=0}^{\lambda=k}f(\lambda)d\lambda$.

For example,

a) If $f$ is the identity function ($\forall k, \ f(k)=k$), definition (9)(a) gives: $$ \begin{equation} F(k)=\frac{1}{2}(k-1)k+\frac{1}{2}k=\dfrac{1}{2}k^2. \tag{11} \end{equation} $$

which doesn't come as a surprise: the "discrete antiderivative" of $k$ is $\frac{1}{2}k^2$... (formula (11) remains in fact the same for $k<0$).

b) If $f$ is the squaring function ($\forall k, \ f(k)=k^2$), (9)(a) becomes:

$$ \begin{equation} \text{If} \ k>0, \ \ \ F(k)=\frac{1}{6}(k-1)k(2k-1)+\frac{1}{2}k^2=\dfrac{1}{3}k^3+\dfrac{1}{6}k. \tag{12} \end{equation} $$

This time, a new term $\dfrac{1}{6}k$ has come into play.
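Both computations are easy to automate exactly; a sketch using Python's `fractions` module for exact arithmetic:

```python
from fractions import Fraction

def F(f, k):
    """Definition (9)(a): the trapezoid-style discrete antiderivative, for k >= 1."""
    return Fraction(1, 2) * f(0) + sum(f(j) for j in range(1, k)) + Fraction(1, 2) * f(k)

for k in range(1, 10):
    assert F(lambda j: j, k) == Fraction(k**2, 2)                      # formula (11)
    assert F(lambda j: j**2, k) == Fraction(k**3, 3) + Fraction(k, 6)  # formula (12)
print("formulas (11) and (12) check out")
```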

Proof of the Theorem: the definition allows us to write:

\begin{equation}F(S_{i+1})-F(S_i)=f(S_i)X_{i+1}+\frac{1}{2}\dfrac{f(S_{i+1})-f(S_i)}{X_{i+1}} \tag{13} \end{equation}

In fact, proving (13) can be split into two cases: either $X_{i+1}=1$ or $X_{i+1}=-1$. Let us consider the first case (the second one is similar): the RHS of (13) becomes

$f(S_i)+\frac{1}{2}(f(S_{i+1})-f(S_i))=\frac{1}{2}(f(S_{i+1})+f(S_i))$, which is the area variation in the trapezoid formula.

Summing equations (13) for $i=0,\dots,n$, and using $F(S_0)=F(0)=0$, gives the desired identity (10).

An example of application: let $f(k)=k$; we get

$$\displaystyle\sum_{i=0}^{n}S_iX_{i+1} = F(S_{n+1})-\frac{n+1}{2}=\dfrac{1}{2}S_{n+1}^2-\frac{n+1}{2},$$

which appears as the discrete equivalent of the celebrated formula: $$ \begin{equation} \displaystyle\int_0^t W(s)dW(s) = \frac{1}{2}W(t)^2-\frac{1}{2}t. \end{equation} $$
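Since identity (10) holds pathwise (not only in expectation), it can be verified on any single simulated walk; a minimal sketch for $f(k)=k$, assuming `numpy`:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.choice([-1, 1], size=n + 1)        # the steps X_1, ..., X_{n+1}
S = np.concatenate(([0], np.cumsum(X)))    # S_0 = 0, S_1, ..., S_{n+1}

lhs = np.sum(S[:-1] * X)                   # sum_{i=0}^{n} S_i X_{i+1}
rhs = 0.5 * S[-1]**2 - (n + 1) / 2         # F(S_{n+1}) - (n+1)/2

print(lhs, rhs, np.isclose(lhs, rhs))      # equal on every sample path
```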

One can establish that the autocorrelation of the $W_t$ process is

$$cov(W_s,W_t)=E(W_sW_t)-E(W_s)E(W_t)=\min(s,t),$$

(see "Autocorrelation of a Wiener Process proof"), providing an unexpected connection with the second example...
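This covariance can itself be checked against the random Fourier series definition of $W_t$; a minimal Monte Carlo sketch, assuming `numpy` (the points $s=0.3$, $t=0.7$ and the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
s, t = 0.3, 0.7
N_terms, n_paths = 100, 50000

n = np.arange(1, N_terms + 1)
G = rng.standard_normal((n_paths, N_terms + 1))   # one row of G_0, ..., G_N per path

def W(u):
    """W_u = G_0 u + sqrt(2) sum_n G_n sin(pi n u)/(pi n), one value per path."""
    return G[:, 0] * u + np.sqrt(2) * G[:, 1:] @ (np.sin(np.pi * n * u) / (np.pi * n))

print(np.mean(W(s) * W(t)), min(s, t))            # both close to 0.3
```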

Last remark: another kind of integral based on a discrete definition is the gauge integral (https://math.vanderbilt.edu/schectex/ccc/gauge/).


4th example (Darboux sums):

Here is a discrete formula:

$$\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$$

(see a proof in "Prove that $\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$")

Does this formula have a continuous "counterpart"?

Taking the logarithm on both sides and dividing by $n$, we get:

$$\tfrac1n \sum_{k=1}^{n-1} \ln \sin \tfrac{k \pi}{n}=\tfrac{\ln(n)}{n}-\ln(2)\tfrac{n-1}{n}$$

Letting now $n \to \infty$, we obtain the rather classical integral:

$$\int_0^1 \ln(\sin(\pi x))dx=-\ln(2)$$
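The convergence can be watched numerically in a few lines (a sketch, assuming `numpy`); both sides of the discrete identity approach $-\ln 2 \approx -0.6931$:

```python
import numpy as np

for n in [10, 100, 1000, 10000]:
    k = np.arange(1, n)                               # k = 1, ..., n-1
    lhs = np.sum(np.log(np.sin(k * np.pi / n))) / n   # (1/n) * sum of the logs
    rhs = np.log(n) / n - np.log(2) * (n - 1) / n     # from the product formula
    print(n, lhs, rhs, -np.log(2))
```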


5th example: bivariate cdfs (cumulative distribution functions).

Let $(X,Y)$ be a pair of random variables with pdf $f_{X,Y}$ and cdf:

$$F_{X,Y}(x,y):=P(X \leq x \ \& \ Y \leq y).$$

Take a look at this formula:

$$P(x_1<X \leq x_2, \ \ y_1<Y \leq y_2)=F_{X,Y}(x_2,y_2)-F_{X,Y}(x_1,y_2)-F_{X,Y}(x_2,y_1)+F_{X,Y}(x_1,y_1)\tag{14}$$

(https://www.probabilitycourse.com/chapter5/5_2_2_joint_cdf.php)

It is the discrete equivalent of the continuous definition of $f_{X,Y}$ as the mixed second-order partial derivative of $F_{X,Y}$, under the assumption that $F_{X,Y}$ is a $C^2$ function:

$$f_{X,Y}(x,y)=\dfrac{\partial^2 F_{X,Y}}{\partial x \partial y}(x,y).\tag{15}$$

Do you see why? Hint: let $x_2 \to x_1$ and $y_2 \to y_1$, and identify the LHS of (14) with $f_{X,Y}(x_1,y_1)\,dx\,dy$.
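A small numerical illustration of the passage from (14) to (15), assuming `scipy` (an independent pair of standard normals is taken as the test case, so that $F_{X,Y}(x,y)=\Phi(x)\Phi(y)$; the evaluation point and the step $h$ are arbitrary):

```python
import numpy as np
from scipy.stats import norm

# Independent standard normals: F_{X,Y}(x,y) = Phi(x) Phi(y), f_{X,Y}(x,y) = phi(x) phi(y)
F = lambda x, y: norm.cdf(x) * norm.cdf(y)
f = lambda x, y: norm.pdf(x) * norm.pdf(y)

x1, y1 = 0.3, -0.2
h = 1e-4                           # small rectangle with sides dx = dy = h
x2, y2 = x1 + h, y1 + h

# LHS of (14): the probability of the small rectangle
rect = F(x2, y2) - F(x1, y2) - F(x2, y1) + F(x1, y1)

print(rect / h**2, f(x1, y1))      # both close to 0.1491...
```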

Final remarks:

  1. A remarkable text about this analogy in Physics : https://www.lptmc.jussieu.fr/user/lesne/MSCS-Lesne.pdf

  2. In linear algebra, continuous analogs of some fundamental factorizations (https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2014.0585).

  3. A similar question on MathOverflow mentioning in particular the well-written book "Computing the Continuous Discretely" by Beck and Robins.

  4. There are many other tracks, e.g., connections with graphs ("Discrete and continuous: two sides of the same?" by L. Lovász, or this one: http://jimhuang.org/CDNDSP.pdf), discrete vs. continuous versions of the logistic equation (https://math.stackexchange.com/q/3328867), etc.

  5. In the epidemiology domain: "Discrete versus continuous-time models of malaria infections" lecture notes by Lucy Crooks, ETH Zürich.

  6. Another example in probability: the connection between a discrete and a continuous distribution, i.e., the Poisson($\lambda$) distribution and the $\Gamma(n)$ distribution, which is well treated in [this answer](https://math.stackexchange.com/q/2228023).


My favorite one is about the discrete analogue of the wave equation. We all know how to solve the wave equation $$ u_{tt}=\alpha^2u_{xx},\quad u(0,t)=u(1,t)=0,\quad u(x,0)=f(x),\ u_t(x,0)=g(x) $$ by separation of variables. However, rigorously it requires the notion of Fourier series, convergence, and the fact that the corresponding Sturm-Liouville problem produces a basis.

One can instead consider a system of masses, two of which (the first and the last one) are fixed, while the rest are connected with springs. Then Newton's and Hooke's laws imply $$ m\ddot u_{j}=c(u_{j+1}-2u_j+u_{j-1}),\quad j=1,\ldots,k-1,\qquad u_0(t)=u_k(t)=0, $$ where $u_j(t)$ is the displacement of the $j$-th mass at time $t$. We also have two initial conditions for each mass.

Of course this system can be solved using the standard methods for linear systems. But one can also look for a solution in the form $$ u_j(t)=T(t)J(j) $$ and end up with a boundary value problem for a second-order difference operator, which produces a finite number of solutions, from which one can build the full solution to the problem. Of course, taking the correct limit as $k\to\infty$, one can rigorously obtain the wave equation.
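Here is a minimal sketch of this mode-superposition solution (assuming `numpy`; the values of $m$, $c$, $k$ and the initial data are arbitrary choices). The eigenvectors of the second-order difference operator play exactly the role of the Sturm-Liouville basis:

```python
import numpy as np

# Discrete wave equation m u'' = c L u with fixed ends u_0 = u_k = 0,
# where L is the second-order difference operator on the interior masses.
m, c, k = 1.0, 1.0, 20
L = -2 * np.eye(k - 1) + np.diag(np.ones(k - 2), 1) + np.diag(np.ones(k - 2), -1)

lam, V = np.linalg.eigh((c / m) * L)       # eigenpairs; all lam are negative
omega = np.sqrt(-lam)                      # the normal-mode frequencies

x = np.arange(1, k) / k                    # rest positions of the interior masses
f = np.sin(np.pi * x)                      # initial displacement u(x, 0)
g = np.zeros(k - 1)                        # initial velocity u_t(x, 0)

a = V.T @ f                                # modal coordinates of f...
b = (V.T @ g) / omega                      # ...and of g

def u(t):
    """Displacements of the k-1 interior masses at time t (mode superposition)."""
    return V @ (a * np.cos(omega * t) + b * np.sin(omega * t))

print(np.allclose(u(0.0), f))              # True: the initial condition is met
```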


I like the following problem:

Decompose a positive integer $N$ into positive integer addends summing to $N$ with maximal product.

In the "discrete" version of this problem, you find that the answer is almost to use all $3$s; but, not every positive integer is a sum of $3$s, so you might need some $2$s to finish it off. Unsurprisingly, then, the "continuous" analog of a problem solved discretely by mostly-$3$s-with-a-few-$2$s involves $e$.

The benefit of $3$ over $2$ is easy to see by decomposing $6$, for which $3 \cdot 3 = 9 > 8 = 2 \cdot 2 \cdot 2$.
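The mostly-$3$s rule is easy to confront with brute force (a sketch in Python; the range checked, $n \le 19$, is an arbitrary cutoff):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_product(n: int) -> int:
    """Maximal product of positive integer addends summing to n
    (the trivial one-addend decomposition n = n is allowed)."""
    best = n
    for first in range(1, n):
        best = max(best, first * best_product(n - first))
    return best

def mostly_threes(n: int) -> int:
    """The mostly-3s rule: use 3s, patching with 2s when n % 3 != 0."""
    if n % 3 == 0:
        return 3 ** (n // 3)
    if n % 3 == 1:                   # replace 3 + 1 by 2 + 2
        return 4 * 3 ** ((n - 4) // 3)
    return 2 * 3 ** ((n - 2) // 3)   # n % 3 == 2

for n in range(2, 20):
    assert best_product(n) == mostly_threes(n)
print("the 3s-and-2s rule matches brute force for n = 2, ..., 19")
```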

For a couple of references to this particular problem, see:

Krause, E. F. (1996). Maximizing the product of summands; minimizing the sum of factors. Mathematics Magazine, 69(4), 270-278.

Wasserman, N. H. (2014). A rationale for irrationals: An unintended exploration of e. Mathematics Teacher, 107(7), 500-507.

For a student-friendly version, see Play With Your Math.
