Are there "tight" (as possible) upper bounds for $\max_t{\left\{ \left|\frac{d f(t)}{dt}\right|\right\}}$ for time-limited functions???

This may not be a proper answer to your questions, but I hope it lends you some intuition that the bound $\sup_t|df/dt| \le \int |\omega\hat f(\omega)|\,d\omega$, while tight in the sense that there is equality for $t = 0$, can be a "loose" bound for many values of $t$, in a quantitative sense. This answer relates to your question when you consider time-limited functions (say to $(-\pi,\pi)$ to be concrete) extended periodically on the whole line.

Consider a much simpler "model". Let $f(t) = \sum_{n=-N}^N a_ne^{int}$, where the $a_n$ are complex numbers, let's say all satisfying $|a_n| = 1$ to start. Then we can ask about how close to being an equality the following inequality is: $$ |\sum_{n=-N}^N a_ne^{int}| \le \sum_{n=-N}^N|a_ne^{int}| = 2N+1.\tag{1} $$ For $t = 0$, we have equality, but there could be a lot of cancellation making the inequality "loose" for other values of $t$. Let's consider some special cases to build intuition.

  1. $a_n = 1$ for every $n$. In this case, we can sum $f(t)$ as a geometric series to see $$f(t) = \frac{\sin((N+1/2)t)}{\sin(t/2)}.$$ The function $f(t)$ is periodic with period $2\pi$. It is a calculation that $$c\log N\le \int_{-\pi}^\pi |f(t)|\,dt\le C\log N,$$ which we can interpret as meaning that for an "average" point $t\in (-\pi,\pi)$, $|f(t)|\approx \log N$, meaning that there is actually lots of cancellation happening, and the inequality $(1)$ is quite loose on average.
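A small numerical sketch of this (added for illustration, not part of the original argument): the $L^1$ norm of the Dirichlet kernel over one period grows only logarithmically in $N$, so at a typical point the trivial bound $2N+1$ is very loose.

```python
# Sketch: L1 norm of the Dirichlet kernel D_N(t) = sin((N+1/2)t)/sin(t/2) over (-pi, pi),
# compared with the trivial bound 2N+1. The grid-based integration is only approximate.
import numpy as np
from scipy.integrate import trapezoid

def dirichlet_kernel(t, N):
    # sum_{n=-N}^{N} e^{int}, summed as a geometric series
    return np.sin((N + 0.5) * t) / np.sin(t / 2)

t = np.linspace(-np.pi, np.pi, 200_000)   # even point count, so t = 0 is never hit exactly
for N in (10, 100, 1000):
    l1 = trapezoid(np.abs(dirichlet_kernel(t, N)), t)
    print(f"N={N:5d}   2N+1={2*N+1:6d}   L1 norm={l1:8.2f}   L1/log(N)={l1/np.log(N):6.2f}")
```

The last column stays of order one while $2N+1$ grows linearly, which is the quantitative sense in which $(1)$ is loose on average.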

  2. Example 1 was a very special case (the function $f(t)$ in that example is known as the $N$th Dirichlet kernel), so let's consider something more general. Suppose that the $a_n$ are independent random signs, meaning $\mathrm{Prob}(a_n = 1) = \mathrm{Prob}(a_n = -1) = 1/2$. Then $f(t)$ is a random trigonometric sum, and is something of a model for a "generic" or "arbitrary" function. By Khintchine's inequality, $$\mathbb E|f(t)| \le C(2N+1)^{1/2},$$ which says that on average, $|f(t)|$ is smaller than the bound $2N+1$ by a square-root, again implying $(1)$ is a very loose bound. Heuristically, $f(t)$ is very similar to a "random walk", and this inequality is an expression of the well-known "root-mean-square displacement" of a random walk.
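Again purely as an added illustration, a Monte Carlo sketch of this: for random signs, the average of $|f(t)|$ at a fixed $t$ sits near $\sqrt{2N+1}$ rather than $2N+1$.

```python
# Sketch: E|f(t)| for f(t) = sum_{n=-N}^{N} a_n e^{int} with independent random signs a_n.
# N, the number of trials, and the evaluation point t are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
N, trials, t = 500, 2000, 0.7
n = np.arange(-N, N + 1)
signs = rng.choice([-1.0, 1.0], size=(trials, n.size))
f_t = signs @ np.exp(1j * n * t)          # one value of f(t) per random draw
print("trivial bound 2N+1 :", 2 * N + 1)
print("mean |f(t)|        :", np.abs(f_t).mean())
print("sqrt(2N+1)         :", np.sqrt(2 * N + 1))
```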

In general, quantifying the extent to which an inequality like $(1)$ is tight or not is a difficult question, but for a "random" or "generic" function, we often expect to improve upon $(1)$ (in the sense that we expect we can replace the right-hand side $2N+1$ with something smaller), as Example 2 might suggest.

As a side note, I suspect that asking about the tightness of the inequality $$\sup_t|df/dt| \le \int|\omega\hat f(\omega)|\,d\omega \tag{1'}$$ is equivalent to asking about the tightness of the inequality $$\sup_t|g(t)|\le \int|\hat g(\omega)|\,d\omega\tag{2}$$ considering that $df/dt$ and $\omega\hat f(\omega)$ are related (up to unimodular complex numbers) by the Fourier transform, so $(1')$ is a special case of $(2)$ when we set $g(t) = df/dt$.


This is the second part of the question, containing introduction and motivation for the topic; I have added a separate answer with the actual results I have found here.

This is the author of the question, extending it here because I ran out of space in the question itself.

The Fourier series of a periodic function $x(t)$ with period $T$ is given by: $$\begin{array}{r c l} sx(t) & = & \sum\limits_{k=-\infty}^{\infty} a_k \, e^{j \omega t},\quad \omega = k\,\omega_0,\quad \omega_0 = \frac{2\pi}{T} \\ \text{with}\,\,\,a_k & = & \frac{1}{T}\int\limits_T\,x(t)\,e^{-j \omega t}\, dt \\ \end{array}$$ When starting to solve the problem, I really started from here: what is the worst possible "basic scenario" of infinite slew rate, or "jumps", extendable to any other situation? I believe it is the rectangular function with an arbitrary amplitude, since any weird function could in the limit be built from "infinitely thin steps" (here, turned into delta functions), and any possible slew rate can be obtained by changing its height. But to use the Fourier series, instead of working with the rectangular function I will work with the symmetric square wave, whose first period is defined by (following the notation of Chapter 4 of [1]): $$x(t) = \begin{cases} A\,,\,\text{if}\,\,\,0 \leq |t| \leq T_1 \\ 0\,,\,\text{if}\,\,T_1 < |t| \leq T/2 \end{cases} $$ For this signal, the Fourier coefficients are given by: $$\begin{array}{r c l} a_k & = & \frac{2A}{T}\cdot\frac{\sin(\omega T_1)}{\omega} \\ \Rightarrow sx(t) & = & \frac{2A}{T}\sum\limits_{k=-\infty}^{\infty} \frac{\sin(\omega T_1)}{\omega}\cdot e^{j \omega t} \\ \end{array}$$ Now, to study the maximum possible rate of change, I truncate the series of the square wave to the components $|k| \leq N$ ($N>0$) and take its derivative with respect to $t$: $$\begin{array}{r c l} sx_N(t) & = & \frac{2A}{T} \sum\limits_{k=-N}^{N} \frac{\sin(\omega T_1)}{\omega}\cdot e^{j \omega t} \\ \Rightarrow y_N(t) = \frac{d}{dt}sx_N(t) & = & j\frac{2A}{T} \sum\limits_{k=-N}^{N} \sin(\omega\,T_1)\, e^{j \omega t} \\ \end{array}$$ Finding the maximum rate of change is equivalent to studying $\max_t\,|y_N(t)|$, and for this we can expand $y_N(t)$ using $\sin(x) = \frac{1}{2j}(e^{jx}-e^{-jx})$: $$\begin{array}{r c l} y_N(t) & = & \frac{A}{T} \sum\limits_{k=-N}^{N} e^{j \omega t}\cdot \left( e^{j \omega T_1} - e^{-j \omega T_1} \right)\\ & = & \frac{A}{T}\cdot \left( \underbrace{\sum\limits_{k=-N}^{N} e^{j \omega (t+T_1)}}_{\text{Dirichlet Kernel}} -\underbrace{\sum\limits_{k=-N}^{N} e^{j \omega (t-T_1)}}_{\text{Dirichlet Kernel}} \right)\\ & = & \frac{A}{T}\cdot \left( \frac{\sin\left((2N+1)\cdot\frac{\omega_0}{2}\cdot(t+T_1)\right)}{\sin\left( \frac{\omega_0}{2}\cdot(t+T_1)\right)}-\frac{\sin\left((2N+1)\cdot\frac{\omega_0}{2}\cdot(t-T_1)\right)}{\sin\left( \frac{\omega_0}{2}\cdot(t-T_1)\right)}\right)\,\,\,\,\texttt{(Eq. 9)} \\ \end{array}$$ The function $\sin((2N+1)\,x)/\sin(x)$, known as the Dirichlet kernel [15], is an even periodic function whose principal period looks like a "high-frequency" sinc function, with a main lobe that attains a maximum value of $(2N+1)$ [16].
Since the two Dirichlet kernels in Eq. 9 cannot both attain their maximum magnitude $2N+1$ at the same time, I will work with the limit case $t \to -T_1$: $$ \Rightarrow y_N^*(t) = \frac{A}{T}\cdot \left( 2N+1-\frac{\sin\left((2N+1)\cdot\omega_0 T_1\right)}{\sin\left( \omega_0 T_1\right)}\right) $$ Since the remaining term enters with a negative sign, the largest possible value of $y_N^*(t)$ is attained when $\sin((2N+1)\,x)/\sin(x)$ is at its minimum (here $x = \omega_0 T_1$): graphically, the minimum is attained on the first negative lobes, which move along the curve $-1/\sin(x)$ as $N$ changes; equating them on the first negative lobe to the right of the origin ($x>0$): $$ \frac{\sin\left((2N+1)\,x\right)}{\sin(x)} = -\frac{1}{\sin(x)} \Rightarrow \sin\left((2N+1)\,x\right) = -1 \Rightarrow (2N+1)\,x = \frac{3\pi}{2} \Rightarrow x^*= \frac{3\pi}{2\,(2N+1)} $$ Now, using the small-angle approximation $\sin(x) \approx x$ at $x^*$, I can approximate the minimum value: $$ \min_{0<x<2\pi}\left\{\frac{\sin\left((2N+1)\,x\right)}{\sin(x)}\right\} = -\frac{1}{\sin(x^*)} \approx -\frac{1}{x^*} = - \frac{2\,(2N+1)}{3\pi} = y^*$$ where the true minimum is slightly lower than $y^*$. With this, I can form a lower bound for the maximum rate of change: $$\Rightarrow y_N^{LB}(t) = \frac{A}{T}\cdot\left(2N+1+\frac{2\,(2N+1)}{3\pi}\right)= \frac{A}{T}\cdot\left(1+\frac{2}{3\pi}\right)\cdot\left(2N+1\right) < \max_t |y_N(t)|$$ Here, if $N \to \infty$ then $\max_t |y_N(t)| \to \infty$, and this is why "infinite-bandwidth" signals can achieve an infinite maximum rate of change.
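As a numerical sanity check of these bounds (added for illustration; $A$ and $T$ are arbitrary test values), the truncated series can be evaluated directly. Note that the derivation above implicitly chooses the pulse half-width so that $x=\omega_0 T_1$ lands on the first negative lobe, i.e. $T_1$ depends on $N$; for a fixed $T_1$ the second kernel only contributes an $O(1)$ tail near $t=-T_1$.

```python
# Sketch: max_t |y_N(t)| for the truncated square-wave derivative, versus the lower and
# upper bounds derived above. T1 is chosen N-dependent, as in the text, so that
# w0*T1 = 3*pi/(2*(2N+1)) sits on the first negative lobe of the second Dirichlet kernel.
import numpy as np

A, T = 1.0, 2 * np.pi
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 200_001)

for N in (10, 50, 200):
    T1 = 3 * np.pi / (2 * (2 * N + 1) * w0)
    yN = np.zeros_like(t)
    for k in range(1, N + 1):
        yN += np.sin(k * w0 * T1) * np.sin(k * w0 * t)
    yN *= -4 * A / T        # real form of j*(2A/T)*sum_{k=-N}^{N} sin(k w0 T1) e^{j k w0 t}
    lb = (A / T) * (1 + 2 / (3 * np.pi)) * (2 * N + 1)
    ub = (2 * A / T) * (2 * N + 1)
    print(f"N={N:4d}   lower bound={lb:9.3f}   max|y_N|={np.abs(yN).max():9.3f}   upper bound={ub:9.3f}")
```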

Similarly, since the amplitude of the negative lobe of the Dirichlet kernel is always smaller than the main-lobe amplitude, an upper bound for the maximum rate of change can be made as: $$\Rightarrow y_N^{UB}(t) = \frac{2A}{T}\cdot\left(2N+1\right) > \max_t |y_N(t)|$$ From here, no matter how large $N$ is, if the signal is band-limited then it will have a finite maximum rate of change. Now, think about what happens if I suppress a "symmetric" band of frequencies (the positive components and the corresponding negative ones), with positive indices running from an arbitrary index $c$ up to an index $d$, using that: $$ \sum\limits_{k = c}^{d} b_k = \sum\limits_{k = 1}^{d} b_k -\sum\limits_{k = 1}^{c-1} b_k $$ so that $2(d-c+1)$ components are removed in total, and the estimated maximum rate of change of this "cropped signal" becomes: $$ \begin{array}{r c l} y_N^\text{cut} (t) & = & \frac{A}{T}\cdot\left(1+\frac{2}{3\pi}\right)\cdot\left\{2N+1-2\,(d-c+1) \right\}\\ & = & \frac{A}{T}\cdot\left(1+\frac{2}{3\pi}\right)\cdot\left\{2N+2c-2d-1\right\}\\ \end{array}$$ So it doesn't matter how many components I take out of the Fourier series: in the $N \to \infty$ scenario the maximum rate of change can still be infinite, even if I choose $d \gg 0$ and the Fourier coefficients have already decayed to "near-zero" values long before that index (because of the Riemann-Lebesgue lemma). Even though I am only adding "almost-zero" terms, there are still infinitely many of them, so they could add up to a constant (bounded maximum rate of change) or add up to infinity (unbounded maximum rate of change). This is why I believe these $\text{mysterious conditions}\,\mathbb{X}$ will be closely related to the decay of the Fourier spectrum.

Unfortunately, as an engineer I don't have the mathematical toolbox to analyze the decay of functions and, so far, neither the cleverness to find a "good" upper bound for the maximum rate of change of time-limited functions with bounded slew rate, nor approximations of it. I already tried using $A_0 = \int_{-\infty}^{\infty} |F(j\omega)|\,d\omega \rightarrow A_0\cdot \int_{-\infty}^{\infty} \left|\omega \frac{F(j\omega)}{A_0}\right|d\omega = A_0 \cdot E_F[\omega]$, a weighted expected value, and didn't find any bound; I also tried to treat the exponential and polynomial parts of $\int_{-\infty}^{\infty} j\omega\, e^{j\omega t} F(j\omega)\,d\omega$ as a Gamma function through a change of variables, unsuccessfully, and right now I am stuck trying to find an approximation through the stationary-phase method here.

After two months and one and a half notebooks of dead ends I am out of ideas, but it was really interesting to find that known properties kept popping up through my attempts to find a solution, and I hope that beginners can become interested in these questions and learn as much as I did while writing this, about integral norms, the intuition behind total variation, compactly supported and bump functions, different definitions of the Fourier transform with finite integration limits, etc. So please share it with your teachers or department coworkers to see if they get involved in finding a solution.

For mathematicians and physicists, maybe the question seems obvious; believe me, it isn't for engineers. Realizing that really smart people have run after this question before, I believe that the success of Weierstrass in finding continuous functions that are nowhere differentiable, of Brownian motion, which has infinite total variation, of fractals, of the Fabius function, which is smooth but nowhere analytic, of topology explaining things as generally as possible, together with the previous result that unlimited-bandwidth signals can have an infinite maximum rate of change, may have moved mathematicians and scientists away from studying these more specific signals which, besides being time-limited, also have bounded slew rate. Finding these $\text{mysterious conditions}\,\mathbb{X}$ could lead to bounds that are useful for engineers (as exist for band-limited functions, or maybe through optimization), and even more: if physical phenomena can be described under these conditions, then a physical law would also have been found (to keep physics discussions out of here, I left the physical motivation and possible applications in this question). Like the property that says that "if a function is continuous and compactly supported, then it is bounded" [16], it would be awesome to find that if a function satisfies $\text{mysterious conditions}\,\mathbb{X}$ then its maximum rate of change is bounded by $\text{(insert bound here)}$.

Or conversely: given my limited mathematical knowledge, and since I proved to myself that time-limited functions with bounded maximum slew rate do exist (it wasn't obvious to me), assuming that "because infinite-bandwidth signals can achieve infinite maximum slew rate" $\Rightarrow$ "there are no conditions under which time-limited functions achieve bounded maximum slew rate" (so that the examples are just "happy coincidences") would be falling into a logical fallacy (I believe it is named hasty generalization). So, if you could prove that these $\text{mysterious conditions}\,\mathbb{X}$ do not exist, that would also be great, since then I could stop trying to solve this problem.

I hope you can join me in working on this question, so if you are taking it seriously, please share which department you are from so I can start following your results. And if you believe that nothing can be done, I can tell you that an incredibly smart person has already proved (I think) that, at least for one-variable real compactly supported functions with $f(t_0) = f(t_F) \neq 0$, the upper bound of Eq. 2 will always diverge (you can check it here), and unbelievably it was done in half an hour, so I am very hopeful that interesting results can be achieved with your help. For reading this far, thank you very much.


This is all idealized mathematics. Try searching the web for the approximation quality of a measured function under the Fourier transform to gather ideas about what the real problems of the FT are in real-world applications: the step function cannot be Fourier-transformed cleanly, there is overshoot, and so on. Wolfram Alpha can do a clean FT more easily than this community.

Generalizing from the unit step function is really hard, and collecting what the various authors have done and putting it all together is more expensive than a bounty. The FT has two approaches, over an infinite or a finite interval, and the variants of a continuous or a discrete spectrum. A continuous spectrum is more flexible for the function being transformed, but a discrete spectrum is closer to real measurements. The parameters of an FT problem, the length of the interval, the sampling rate, and the curvature changes of the function under measurement, are the subject of many multivariate, multiscale, multidimensional analyses, since all internal and external conditions may change during measurement and transformation.

Answers within the scope of this community should focus on questions that address subranges of the full problem set.


This is the author of the question with part 3 (part 1 is in the question, part 2 in the previous answer), finally with some answers.

Trying to find some alternative bounds to Eq. 2, through a "lucky mistake" while working with the Cauchy-Schwarz inequality, I found some bounds that improve the result, at least for the functions of Table 2 for which Eq. 2 gives finite results... but treat them with caution, because I don't really know why they work, since they were obtained with an "illegal" approach and, as I will explain later, they might not work for every kind of function. How I got them is shown in detail in this other question; here I will only list them and show their results: $$\begin{array}{c} \frac{ \sqrt{\pi} }{4\pi} \left| \sqrt{ \int_{-\infty}^\infty \left| |\omega | (1+4\omega^2) F^2(j\omega) \right| d\omega } \right| \,\,\,\texttt{(Eq. 10)} \\ \frac{ \sqrt{\pi} }{4\pi} \left| \sqrt{ \int_{-\infty}^\infty \left| |\omega | (1+j4 \omega^2) F^2(j\omega) \right| d\omega } \right| \,\,\,\texttt{(Eq. 11)} \\ \frac{ \sqrt{2\pi} }{4\pi} \left| \sqrt{ \int_{-\infty}^\infty \left| \omega^2 (1+2\omega) F^2(j\omega) \right| d\omega } \right| \,\,\,\texttt{(Eq. 12)} \end{array}$$ For the test functions of Table 2, the bound of Eq. 12 turned out to be the tightest. But some of these bounds give finite results even when the function has unbounded slew rate, so, as a method, I first check whether the following bound is finite, and only then apply the bounds of Eq. 10 - Eq. 12: $$ \frac{1}{2\pi}\cdot \frac{4}{5}\Gamma\left(\frac{1}{5}\right)\Gamma\left(\frac{4}{5}\right) \cdot \sup\limits_\omega \left|F(j\omega)\,(1+|\omega |^{2.5})\right|\,\,\,\texttt{(Eq. 13)}$$ The bound of Eq. 13 was obtained through Hölder's inequality as an improved version of the bound of Eq. 8, so it will be higher than Eq. 2. Also note that the exponent was chosen "experimentally" by working with $f(t) = \cos^2(t\pi/2),\,|t|\leq 1$, so, as I explained in the other question, it could easily be improved if you can work with Meijer G-functions. Now the table of results: $$ \begin{array}{|c:c|c:c|c|c:c:c:c:c|} \hline f(t) & \text{dom}(f) = [a\,;\,b] & \mathbb{F}_{[a\,;\,b]}\{f(t)\}(\omega) & \text{dom}(F(j\omega)) & \max_{a < t < b} |f'(t)| & \frac{1}{2 \pi} \int_{-\infty}^{\infty} |j\omega F(j\omega)|d\omega & \text{Eq. 13} & \text{Eq. 10} & \text{Eq. 11} & \text{Eq. 12} \\ \hline \cos^2(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2\sin(\omega)}{(\pi^2\omega-\omega^3)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & 2.28547^* & 6.8973 & 1.9069 & 1.8733 & 1.8708 \\ \hdashline \frac{(1+\cos(t\pi))^2}{4} & [-1; 1] & \frac{3\,\pi^4\sin(\omega)}{(\omega^5-5\pi^2\omega^3+4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 2.61265^* & 9.9209 & 2.4135 & 2.3876 & 2.3864 \\ \hdashline \sin(\frac{t\pi}{2})\cos^2(\frac{t\pi}{2}) & [-1; 1] & j\frac{16\, \pi^2\, \omega \cos(\omega)}{(16\, \omega^4-40\, \pi^2 \omega^2+9\,\pi^4)} & (-\infty; \infty) & 1.5708 & 1.93647^* & 8.6398 & 1.8215 & 1.8077 & 1.8075 \\ \hdashline \text{sinc}(t\pi)\cos(\frac{t\pi}{2}) & [-1; 1] & \frac{1}{2\pi}\left(\text{Si}(\frac{\pi}{2}-\omega)+\text{Si}(\frac{3\pi}{2}-\omega)+\text{Si}(\frac{\pi}{2}+\omega)+\text{Si}(\frac{3\pi}{2}+\omega)\right) & (-\infty; \infty) & 1.62897 & \infty^* & 7.2904 & 1.9552 & 1.9229 & \textbf{1.9276} \\ \hdashline 1-\sin^4(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2(5\pi^2 - 2\,\omega^2) \sin(\omega)}{(\omega^5 - 5\pi^2\omega^3 + 4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 3.01547^* & 10.8628 & 2.2717 & 2.2362 & 2.2332 \\ \hdashline \sin(|t|\pi) & [-1; 1] & \frac{2\pi(\cos(\omega)+1)}{(\pi^2-\omega^2)}& (-\infty; \infty) & \pi^* (``\textit{jump}\,\textit{disc.}") & 426.324^* & \infty & 11.4467^* & 11.4362^* & 10.3742^* \\ \hline \end{array} $$ Now I will review how to work around the problem with the other functions of Table 2, because I have had another "lucky strike". I can find the maximum rate of change of a time-limited function $f(t) = x(t)\left(\theta(t-t_0)-\theta(t-t_F) \right)$ in the time domain, avoiding the problem at its boundary $\partial t = \{t_0,\,t_F\}$, by using: $$ \max_t \left| \frac{df(t)}{dt}\right| \approx \max_t \left| \frac{dx(t)}{dt}\cdot\left(\theta(t-t_0)-\theta(t-t_F) \right)\right|$$
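To make the table above reproducible, here is a rough numerical sketch (added for illustration) of the Eq. 2 and Eq. 12 values for its first row, $f(t)=\cos^2(\pi t/2)$ on $[-1,1]$; the integrals are truncated at $|\omega|=2000$, so the printed numbers only come close to the tabulated ones.

```python
# Sketch: Eq. 2 and Eq. 12 for f(t) = cos^2(pi*t/2) on [-1,1], with windowed transform
# F(jw) = pi^2*sin(w)/(pi^2*w - w^3). Table values are ~2.285 (Eq. 2) and ~1.871 (Eq. 12).
import numpy as np
from scipy.integrate import trapezoid

w = np.linspace(1e-6, 2000, 2_000_001)                # positive half-axis
F = np.pi**2 * np.sin(w) / (np.pi**2 * w - w**3)

# Eq. 2: (1/2pi) * int |w F(jw)| dw over the whole line; the integrand is even, so double it.
eq2 = trapezoid(np.abs(w * F), w) / np.pi

# Eq. 12: sqrt(2pi)/(4pi) * sqrt( int |w^2 (1+2w) F^2(jw)| dw ); |1+2w| differs at +-w while
# F is even, so both signs are added explicitly.
integrand = w**2 * (np.abs(1 + 2 * w) + np.abs(1 - 2 * w)) * F**2
eq12 = np.sqrt(2 * np.pi) / (4 * np.pi) * np.sqrt(trapezoid(integrand, w))

print("Eq. 2  :", eq2)
print("Eq. 12 :", eq12)
print("max|f'|:", np.pi / 2)
```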

I am going to study this figure. From now on, I will use $\Delta \theta \cong \left(\theta(t-t_0)-\theta(t-t_F) \right)$ as shorthand. Since the Dirac delta function can be defined as $\delta(t) = \frac{d\theta(t)}{dt} = \theta'$, and using the sifting property $x(t)\delta(t-a)=x(a)\delta(t-a)$, the following holds: $$\frac{df(t)}{dt} = \frac{dx(t)}{dt}\Delta\theta + x(t)\Delta\theta'= \frac{dx(t)}{dt}\Delta\theta + x(t)\Delta\delta = \frac{dx(t)}{dt}\Delta\theta + x(t_0)\delta(t-t_0)-x(t_F)\delta(t-t_F)$$ So, $$ \max_t \left| \frac{df(t)}{dt}\right| = \max_t \left| \frac{dx(t)}{dt}\Delta\theta + x(t_0)\delta(t-t_0)-x(t_F)\delta(t-t_F)\right|$$ Two things can be noted here: first, the maximum rate of change of a time-limited function with a nonzero value at either border will clearly diverge because of that border value, answering "yes" to my conjecture at the end of the question (part 1); second, it shows which "thing" I need to subtract in order to work with the required term: $$\begin{array}{r c l} \max\limits_t\left| x'\Delta\theta\right| & = & \max\limits_t \left| \frac{df(t)}{dt} + x(t_F)\delta(t-t_F)-x(t_0)\delta(t-t_0)\right| \\ & = & \max\limits_t \left| \frac{1}{2\pi}\int\limits_{-\infty}^\infty \mathbb{F}_{[t_0,\,t_F]}\left\{\frac{df(t)}{dt} + x(t_F)\delta(t-t_F)-x(t_0)\delta(t-t_0)\right\}e^{j\omega t}d\omega \right| \\ & = & \max\limits_t \left| \frac{1}{2\pi}\int\limits_{-\infty}^\infty \left(j\omega F(j\omega) + x(t_F)\mathbb{F}_{[t_0,\,t_F]}\left\{\delta(t-t_F)\right\}-x(t_0)\mathbb{F}_{[t_0,\,t_F]}\left\{\delta(t-t_0)\right\}\right)e^{j\omega t}d\omega \right| \\ \end{array}$$ Unfortunately, the term $\mathbb{F}_{[t_0,\,t_F]}\left\{\delta(t-a)\right\}$ is not easy to define, as shown in the answers to my question here, but from now on I will use the following without a formal proof; it has worked like magic: $$ \int\limits_{t_0}^{t_F}\delta(t-a)e^{-j\omega t}dt = e^{-j\omega a} \int\limits_{t_0-a}^{t_F-a} \delta(u)e^{-j\omega u}du = e^{-j\omega a},\qquad t_0 < a < t_F$$ Please assume for now that this is right (the delicate point is a delta sitting exactly at an endpoint); for example, Wolfram Alpha, when evaluating it for $a<b$, gives: $$ \int\limits_{a}^{b}\delta(t)e^{-j\omega t}dt = \begin{cases} 1,\,\,\text{if}\,a<0<b \\ 0,\,\, a<b<0\,\vee \,0<a<b \end{cases} $$ With this, I will have that: $$\begin{array}{r c l} \max\limits_t\left| x'\Delta\theta\right| & = & \max\limits_t \left| \frac{1}{2\pi}\int\limits_{-\infty}^\infty \left( j\omega F(j\omega) + x(t_F)e^{-j\omega t_F}-x(t_0)e^{-j\omega t_0} \right)e^{j\omega t}d\omega \right|\,\,\,\texttt{(Eq. 14)}\\ & \leq & \max\limits_t \frac{1}{2\pi}\int\limits_{-\infty}^\infty \left| j\omega F(j\omega) + x(t_F)e^{-j\omega t_F}-x(t_0)e^{-j\omega t_0} \right| d\omega \\ & \overset{\text{indep. of}\,t}{=} & \frac{1}{2\pi}\int\limits_{-\infty}^\infty \left| j\omega F(j\omega) + x(t_F)e^{-j\omega t_F}-x(t_0)e^{-j\omega t_0} \right|d\omega \,\,\,\texttt{(Eq. 15)} \end{array}$$ Note that this new upper bound of Eq. 15 reduces to Eq. 2 for the functions for which Eq. 2 already worked, since they satisfy $x(t_0) = x(t_F) = 0$.
Now, updating Table 2 with this new bound, you can see that it works perfectly, removing from the spectrum in the frequency domain the effect of the discontinuity at the edges of the compact support in time (new results in bold, others for comparison): $$ \begin{array}{|c:c|c:c|c|c:c|} \hline f(t) & \text{dom}(f) = [a\,;\,b] & F(j\omega)=\mathbb{F}_{[a\,;\,b]}\{f(t)\}(\omega) & \text{dom}(F(j\omega)) & \max_{a < t < b} |x'\Delta\theta| & \text{Eq. 15} & \text{Eq. 20} \\ \hline \sqrt{1-t^2} & [-1; 1] & \pi \cdot \frac{J_1(\omega)}{\omega} & (-\infty; \infty) & \infty \,\, (|x'(t_0)|= |x'(t_F)|=\infty)& \infty^* & \infty \\ \hdashline \sin(\frac{t\pi}{2}) & [-1; 1] & -j\frac{8\,\omega\cos(\omega)}{(\pi^2-4\,\omega^2)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \mathbf{1.8522}^* & \mathbf{1.2564}^*\,(< \max) \\ \hdashline \sin^2(\frac{t\pi}{2}) & [-1; 1] & \frac{(\pi^2-2\,\omega^2)\sin(\omega)}{(\pi^2\,\omega-\omega^3)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \mathbf{2.28547}^* & 1.8702^* \\ \hdashline \cos^2(\frac{t\pi}{2}) & [0; 1] & j\frac{(\pi^2(1-e^{-j\omega})-2\,\omega^2)}{2\,\omega\,(\omega^2-\pi^2)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \mathbf{1.85684}^* & \mathbf{1.2320}^*\,(< \max) \\ \hdashline \cos^2(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2\sin(\omega)}{(\pi^2\omega-\omega^3)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & 2.28547^* & 1.87029^* \\ \hdashline \frac{(1+\cos(t\pi))^2}{4} & [-1; 1] & \frac{3\,\pi^4\sin(\omega)}{(\omega^5-5\pi^2\omega^3+4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 2.61265^* & 2.3863^* \\ \hdashline \sin(\frac{t\pi}{2})\cos^2(\frac{t\pi}{2}) & [-1; 1] & j\frac{16\, \pi^2\, \omega \cos(\omega)}{(16\, \omega^4-40\, \pi^2 \omega^2+9\,\pi^4)} & (-\infty; \infty) & 1.5708 & 1.93647^* & 1.8080^*\\ \hdashline \text{sinc}(t\pi)\cos(\frac{t\pi}{2}) & [-1; 1] & \frac{1}{2\pi}\left(\text{Si}(\frac{\pi}{2}-\omega)+\text{Si}(\frac{3\pi}{2}-\omega)+\text{Si}(\frac{\pi}{2}+\omega)+\text{Si}(\frac{3\pi}{2}+\omega)\right) & (-\infty; \infty) & 1.62897 & \mathit{14.0197^*} &1.9271^* \\ \hdashline 1-\sin^4(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2(5\pi^2 - 2\,\omega^2) \sin(\omega)}{(\omega^5 - 5\pi^2\omega^3 + 4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 3.01547^* & 2.2302^* \\ \hdashline \sin(|t|\pi) & [-1; 1] & \frac{2\pi\,(1+\cos(\omega))}{(\pi^2-\omega^2)} & (-\infty; \infty) & \pi^* (``\textit{jump}\,\textit{disc.}") & \mathit{426.324^*} & \mathit{44.2918^*} \\ \hline \end{array} $$ Numbers with $(^*)$ were obtained through numerical integration with the NIntegrate function in Wolfram Alpha, and the numbers in italics differ from the values in previous tables because I recently recomputed them after changing the PrecisionGoal parameter.
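As an added sketch of how the boundary terms of Eq. 15 act, take the second row above, $f(t)=\sin(\pi t/2)$ on $[-1,1]$: the plain Eq. 2 integral diverges because $x(\pm 1)=\pm 1$, but the two correction terms cancel the non-decaying part of $j\omega F(j\omega)$; algebraically the corrected spectrum collapses to $2\pi^2\cos(\omega)/(\pi^2-4\omega^2)$, which is absolutely integrable.

```python
# Sketch: Eq. 15 for f(t) = sin(pi*t/2) on [-1,1]; the table above reports ~1.8522.
# The integral is truncated at |w| = 2000, so the printed value is only approximate.
import numpy as np
from scipy.integrate import trapezoid

t0, tF = -1.0, 1.0
x_t0, x_tF = -1.0, 1.0                                   # boundary values of sin(pi*t/2)

w = np.linspace(1e-6, 2000, 2_000_001)
F = -1j * 8 * w * np.cos(w) / (np.pi**2 - 4 * w**2)      # windowed transform from the table
corrected = 1j * w * F + x_tF * np.exp(-1j * w * tF) - x_t0 * np.exp(-1j * w * t0)

# |corrected| is an even function of w for this f, so integrate over w > 0 and double.
eq15 = trapezoid(np.abs(corrected), w) / np.pi
print("Eq. 15 bound      :", eq15)
print("max |f'| on (-1,1):", np.pi / 2)
```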

In the table it can be seen that the new values for the previously divergent bounds are consistent with the values of the other listed functions that behave similarly within the compact support, which supports the validity of this method so far.

But conversely, I already tried to construct upper bounds by the same procedure that gave Eq. 10 - Eq. 12, using this modified spectrum, and unfortunately I found results that are lower than the maximum rate of change. This is why I said that these bounds have to be used with caution, since their validity, or the conditions under which they work correctly, have not been proved yet (and I will not do it, since I don't have enough knowledge). Also, as an example, they give a finite value for $f(t) = \sin(|t|\pi)$ even though its derivative has a "jump" discontinuity within its compact support. As an example of these upper bounds built on the formula of Eq. 15, I also listed one in the last table under Eq. 20: $$ \frac{ \sqrt{2\pi} }{4\pi} \sqrt{ \int_{-\infty}^\infty \left| \sqrt{1+2\omega}\cdot \left( j\omega\, F(j\omega) + x(t_F)e^{-j\omega t_F}-x(t_0)e^{-j\omega t_0} \right)\right|^2 d\omega } \,\,\,\texttt{(Eq. 20)} $$

At this point, with this review, I can already see that at least any time-limited function for which Eq. 15 is finite will have a bounded maximum slew rate within its compact support, which starts to narrow down the conditions under which unlimited-bandwidth signals have a limited maximum rate of change.

Now, the same procedure used to obtain Eq. 14 can be applied to the second derivative: $$ \frac{d^2f(t)}{dt^2} = \frac{d^2x(t)}{dt^2}\Delta\theta + 2\frac{dx(t)}{dt}\Delta\theta' +x(t)\Delta\theta''$$ Here, the only new term is $\Delta\theta'' = \Delta\delta'$, which has the same kind of issues as before with the integral of $\delta(t)$... From the comments on the same question, and also through some properties from Wikipedia (here the Spanish version, since these properties are not shown directly in the English version, maybe because they are not totally right under all possible interpretations of the Dirac delta function): $$\begin{array}{c} h(t)\delta'(t-a) = h(a)\delta'(t-a)-h'(a)\delta(t-a) \\ \left<\nabla \delta_a,\,\varphi \right> = -\nabla\varphi(a) \Rightarrow \left<\nabla \delta_a,\,e^{-j\omega t}\right> = \int\limits_{t_0}^{t_F} \delta'(t-a)\,e^{-j\omega t}dt = -\frac{d}{dt}\left( e^{-j\omega t}\right)\Big|_{t=a} = j\omega\, e^{-j\omega a} \end{array}$$ Taking these two properties as true, expanding the boundary terms gives $$ \frac{d^2f(t)}{dt^2} = \frac{d^2x(t)}{dt^2}\Delta\theta + x'(t_0)\delta(t-t_0)+x(t_0)\delta'(t-t_0) -x'(t_F)\delta(t-t_F)-x(t_F)\delta'(t-t_F)$$ and, transforming term by term (using $\mathbb{F}_{[t_0,\,t_F]}\{f''\} = (j\omega)^2F(j\omega)$ for the full distributional derivative, as in Eq. 14), $$ \Rightarrow \mathbb{F}_{[t_0,\,t_F]}\left\{\frac{d^2x(t)}{dt^2}\Delta\theta\right\} = (j\omega)^2 F(j\omega) + e^{-j\omega t_F}\left(x'(t_F)+j\omega \, x(t_F) \right)-e^{-j\omega t_0}\left(x'(t_0)+j\omega\, x(t_0) \right) \,\,\,\texttt{(Eq. 16)}$$ which can be used to work with the 2nd derivative within the compact support, avoiding the problems at its edges. With this, and allowing the following abuse of notation: $$\begin{array}{c} f(t_0) = \lim\limits_{t \to t_0^+} f(t) = x(t_0) \\ f'(t_0) = \lim\limits_{t \to t_0^+} f'(t) = x'(t_0) \\ f(t_F) = \lim\limits_{t \to t_F^-} f(t) = x(t_F) \\ f'(t_F) = \lim\limits_{t \to t_F^-} f'(t) = x'(t_F) \\ \end{array}$$

Now, it is possible to treat the method of Eq. 14 and Eq. 16 as if it were a transform, defined as: $$\begin{array}{l l l} \mathring{\mathbb{F}}\{1\}_{(\omega)} & = & \mathbb{F}_{[t_0,\,t_F]}\{1\}_{(\omega)} =\displaystyle{ \int\limits_{t_0}^{t_F} e^{-j\omega t}\,dt} = \frac{j}{\omega}\cdot\left( e^{-j\omega t_F}- e^{-j\omega t_0}\right) \\ \mathring{\mathbb{F}}\{f(t)\}_{(\omega)} & = & \mathbb{F}_{[t_0,\,t_F]}\{f(t)\}_{(\omega)} =F(j\omega) \\ \mathring{\mathbb{F}}\{f'(t)\}_{(\omega)} & = & j\omega\,F(j\omega) + e^{-j\omega t_F}f(t_F)-e^{-j\omega t_0}f(t_0) \\ \mathring{\mathbb{F}}\{f''(t)\}_{(\omega)} & = & (j\omega)^2\,F(j\omega) + e^{-j\omega t_F}\left(f'(t_F)+j\omega f(t_F)\right)-e^{-j\omega t_0}\left(f'(t_0)+j\omega f(t_0)\right)\,\,\,\,\,\,\,\texttt{(Eq. 17)} \\ \end{array}$$ I haven't seen these transforms before, but they probably already exist, so please tell me how they are named so I can look for references; for now, just in case I accidentally found something new, let's call them "Herreros' Transforms" (yes, because of ego $\texttt{XD}$).
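Since the rule for $\mathring{\mathbb{F}}\{f'\}$ is just integration by parts on $[t_0,t_F]$, it is easy to spot-check numerically; the following sketch (added for illustration, with an arbitrary test function) compares it against a direct evaluation.

```python
# Sketch: the f' rule of Eq. 17 versus the directly computed windowed transform of f'.
import numpy as np
from scipy.integrate import quad

t0, tF = -0.3, 1.2
f  = lambda t: np.exp(-t) * np.sin(3 * t)                     # arbitrary test function
fp = lambda t: np.exp(-t) * (3 * np.cos(3 * t) - np.sin(3 * t))

def windowed_ft(g, w):
    re = quad(lambda t: np.real(g(t) * np.exp(-1j * w * t)), t0, tF)[0]
    im = quad(lambda t: np.imag(g(t) * np.exp(-1j * w * t)), t0, tF)[0]
    return re + 1j * im

for w in (0.5, 2.0, 7.0):
    direct = windowed_ft(fp, w)
    rule = 1j * w * windowed_ft(f, w) + f(tF) * np.exp(-1j * w * tF) - f(t0) * np.exp(-1j * w * t0)
    print(w, np.round(direct, 8), np.round(rule, 8))          # both columns should agree
```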

As said before, this transform is useful to avoid the discontinuity at the edges of the compact support of time-limited functions, but something interesting also happens when it is applied to linear ordinary differential equations:

Let $y(t)$ be a function defined by the following equation with initial conditions at time $t_i$, let $a, b, c \in \mathbb{C}$ be arbitrary constants, and apply this "new" transform, taking advantage of the fact that it inherits the linearity of the Fourier transform: $$\begin{array}{r c l} y'+by+c & = & 0,\,\,\,\,\,y(t_i), \,\,\,\,\,\,\Bigg/ \,\,\mathring{\mathbb{F}}\{\,\,\}\\ \mathring{\mathbb{F}}\{y'\}+b\,\mathring{\mathbb{F}}\{y\}+\mathring{\mathbb{F}}\{c\} & = & 0\\ j\omega\,Y(j\omega)+e^{-j\omega t_F}y(t_F)-e^{-j\omega t_0}y(t_0)+b\,Y(j\omega)+\frac{jc}{\omega}\,\left( e^{-j\omega t_F}- e^{-j\omega t_0}\right) & = & 0\\ Y(j\omega)(j\omega+b)+e^{-j\omega t_F}\left(y(t_F)+\frac{jc}{\omega}\right)-e^{-j\omega t_0}\left(y(t_0)+\frac{jc}{\omega}\right) & = & 0\\ Y(j\omega) & = & \displaystyle{\frac{e^{-j\omega t_0}\left(y(t_0)+\frac{jc}{\omega}\right) - e^{-j\omega t_F}\left(y(t_F)+\frac{jc}{\omega}\right)}{(j\omega+b)}}\\ Y(j\omega) & = & \displaystyle{\frac{e^{-j\omega t_F}\left(j\omega\,y(t_F)-c\right)- e^{-j\omega t_0}\left(j\omega\,y(t_0)-c\right)}{\omega\,(\omega-jb)}} \quad \texttt{(Eq. 18)} \end{array}$$
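A quick numerical check of Eq. 18 (added for illustration), for the simple case $b=1/2$, $c=0$, whose solution is $y(t)=2e^{-t/2}$ on $[-1,1]$ (one of the test functions listed further below): the endpoint values are read off from the solution and the formula is compared with the directly computed windowed transform.

```python
# Sketch: Eq. 18 for y' + b*y + c = 0 with b = 1/2, c = 0 and solution y(t) = 2*exp(-t/2).
import numpy as np
from scipy.integrate import quad

b, c = 0.5, 0.0
t0, tF = -1.0, 1.0
y = lambda t: 2 * np.exp(-t / 2)

def Y_eq18(w):
    num = np.exp(-1j * w * tF) * (1j * w * y(tF) - c) - np.exp(-1j * w * t0) * (1j * w * y(t0) - c)
    return num / (w * (w - 1j * b))

def Y_direct(w):
    re = quad(lambda t: np.real(y(t) * np.exp(-1j * w * t)), t0, tF)[0]
    im = quad(lambda t: np.imag(y(t) * np.exp(-1j * w * t)), t0, tF)[0]
    return re + 1j * im

for w in (0.3, 1.0, 5.0):                 # w = 0 must be avoided in Eq. 18 as written
    print(w, np.round(Y_eq18(w), 8), np.round(Y_direct(w), 8))
```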

The same procedure can also be applied to second-order linear ordinary differential equations: $$\begin{array}{r c l} y'' +ay'+by+c & = & 0,\,\,\,\,\,y(t_i), y'(t_i), \,\,\,\,\,\,\Bigg/ \,\,\mathring{\mathbb{F}}\{\,\,\}\\ \mathring{\mathbb{F}}\{y''\}+a\,\mathring{\mathbb{F}}\{y'\}+b\,\mathring{\mathbb{F}}\{y\}+\mathring{\mathbb{F}}\{c\} & = & 0 \end{array}$$ $$(j\omega)^2\,Y(j\omega) + e^{-j\omega t_F}\left(y'(t_F)+j\omega y(t_F)\right)-e^{-j\omega t_0}\left(y'(t_0)+j\omega y(t_0)\right) +ja\omega\,Y(j\omega)+ae^{-j\omega t_F}y(t_F)-ae^{-j\omega t_0}y(t_0)+bY(j\omega)+\frac{jc}{\omega}\,\left( e^{-j\omega t_F}- e^{-j\omega t_0}\right) = 0 $$ $$ Y(j\omega)(-\omega^2+ja\omega+b)+e^{-j\omega t_F}\left(y'(t_F)+y(t_F)(j\omega+a)+\frac{jc}{\omega}\right)-e^{-j\omega t_0}\left(y'(t_0)+y(t_0)(j\omega+a)+\frac{jc}{\omega}\right) = 0 $$ $$\begin{array}{r c l} Y(j\omega) & = & \displaystyle{\frac{e^{-j\omega t_0}\left(y'(t_0)+y(t_0)(j\omega+a)+\frac{jc}{\omega}\right) - e^{-j\omega t_F}\left(y'(t_F)+y(t_F)(j\omega+a)+\frac{jc}{\omega}\right)}{(ja\omega-\omega^2+b)}}\\ Y(j\omega) & = & \displaystyle{\frac{e^{-j\omega t_F}\left(\omega\,y'(t_F)+\omega\,y(t_F)(j\omega+a)+jc\right)-e^{-j\omega t_0}\left(\omega\,y'(t_0)+\omega\,y(t_0)(j\omega+a)+jc\right)}{\omega\,(\omega^2-ja\omega-b)}} \quad \texttt{(Eq. 19)} \end{array}$$ The amazing thing (at least for me) is that the $Y(j\omega)$ obtained from Eq. 18 and Eq. 19 are actually the Fourier transforms of the time-limited versions, with domain $[t_0,\,t_F]$, of the solution functions $y(t)$ for the initial values $y(t_i),\,y'(t_i)$, obtained simply by formula, without needing to evaluate a convolution, which is much easier at least for me: given $y(t)$, I take its derivatives, form its linear differential equation, and then just use the formulas of Eq. 18 or Eq. 19.

Also note that for these inhomogeneous linear differential equations for $y(t)$: if $b\neq 0$ and the solution has the form $y(t)=x(t)-c/b$, with $x(t)$ the solution of the homogeneous version $x'+bx=0$ or $x''+ax'+bx=0$ in each case, then, since $y(t_0)=x(t_0)-c/b$ and $y(t_F)=x(t_F)-c/b$, and by the linearity of the Fourier transform, I can write the solutions of Eq. 18 and Eq. 19 in the same form for $Y(j\omega)$ as: $$Y(j\omega) = X(j\omega)\Big|_{c=0}-\frac{jc}{b\omega}\left(e^{-j\omega t_F}-e^{-j\omega t_0}\right)$$ which is useful for comparing the results with the solutions delivered by Wolfram Alpha.

I have already tested these formulas with the functions $y(t)=2\,e^{-\frac{t}{2}},\,t\in [-1,1];\,$ $y(t)=\sin^2\left(\frac{t\pi}{2}\right),\,t\in [-\frac{3}{4},\frac{1}{4}];\,$ $y(t)=e^{-t}\sin(t)+\frac{1}{2},\,t\in [-\pi,4\pi];\,$ $y(t)=\pi e^{-5t},\,t\in [-\pi,-1];\,$ and $y(t)=6\,e^{-t}\cos\left(\frac{t\pi}{2}\right),\,t\in [-3,-2];\,$ and the formulas have worked perfectly, even when the domains don't contain $0$, a possible issue that could arise because of the definitions of the Dirac delta function used, so I am very confident they work at least for "traditional" functions (a formal proof is required, and in math "weird functions" could break the assumptions, but at least for me this is enough, and I can't make anything more elaborate than these explanations).

Thinking now about the conditions that make unlimited-bandwidth signals have a bounded slew rate, I think it is interesting to analyze the case of Eq. 19, since second-order linear differential equations are widely used as approximations, as in the harmonic oscillator: their solutions in closed form are already known, but as an example of what can be done, let's use Eq. 15 with the result of Eq. 19; then we will have that: $$ \max_t |y' \Delta\theta| \leq \frac{1}{2\pi} \int\limits_{-\infty}^\infty \left|\mathring{\mathbb{F}}\{ y'(t)\}_{(\omega)} \right|d\omega $$ with $Y(j\omega)$ from Eq. 19 plugged into $\mathring{\mathbb{F}}\{y'\}_{(\omega)} = j\omega\,Y(j\omega)+e^{-j\omega t_F}y(t_F)-e^{-j\omega t_0}y(t_0)$.

So, paying attention to Eq. 19: since the angular frequency is defined as $\omega \in \mathbb{R}$, at least in the scenario where the $y'$ term is present (so $a\neq 0$) and where $y(t_0)=y(t_F)=y'(t_0)=y'(t_F)=0$, the integrand will decay like $1/(\omega^2+\text{non-zero terms})$, so the solutions will have a bounded slew rate. I have extended this topic in this and this question.

So far, with these results I have convinced myself that there really is a physical law hiding in the conditions under which unlimited-bandwidth signals have a bounded slew rate, but I don't have enough knowledge to analyze the convergence of this kind of integral, so I hope someone could take this work, finish it, and share it here, since from it I believe better upper bounds could be found, and with them I could at least build many new things.

Nevertheless, as an example, having a divergent Eq. 15 upper bound does not directly mean that the function has an unbounded maximum slew rate. This can be seen with the function $y(t)=c_1+t\,c_2,\,\,t_0 \leq t \leq t_F$, which satisfies the equation $y' = c_2$, a constant; using Eq. 18, its upper bound is the integral of a trigonometric function divided by $\omega$, so the result is infinite, whereas $\max_t |y' \Delta\theta| = |c_2|$, so its slew rate is actually bounded.
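A short sketch of this counterexample (added for illustration): for $f(t)=c_1+c_2 t$ the boundary-corrected spectrum of Eq. 15 reduces to $(jc_2/\omega)(e^{-j\omega t_F}-e^{-j\omega t_0})$, whose magnitude is $2|c_2|\,|\sin(\omega(t_F-t_0)/2)|/|\omega|$, so the truncated Eq. 15 integral keeps growing with the cutoff even though the slew rate is just $|c_2|$.

```python
# Sketch: truncated Eq. 15 integral for f(t) = c1 + c2*t on [t0, tF] as the cutoff W grows.
import numpy as np
from scipy.integrate import quad

c2, t0, tF = 3.0, -1.0, 1.0
spec = lambda w: 2 * abs(c2) * abs(np.sin(w * (tF - t0) / 2)) / abs(w)

for W in (10, 100, 1000):
    val = quad(spec, 1e-9, W, limit=10_000)[0] / np.pi   # even integrand: (1/2pi)*2*int_0^W
    print(f"cutoff W={W:5d}   truncated Eq. 15 value={val:8.3f}   (max |f'| = {abs(c2)})")
```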

Also note that the transforms of Eq. 17 allow extending the traditional properties of the solutions of linear differential equations to these time-limited functions, so I wonder whether this allows working with these time-limited functions, or whether it can be extended to any kind of time-limited differential equation, as if they formed a mathematical space like the space of bump functions $C_c^\infty$; I left the question here. I don't know enough about this topic, but thinking about how probability spaces are built from càdlàg CDFs, maybe it is possible to think of them as piecewise functions once their border discontinuities are removed.

But unfortunately, I don't believe the transforms of Eq. 17 are a method to solve time-limited differential equations; instead, they are a method to extract time-limited "pieces" of an already determined solution: if I have a differential equation with a solution $y(t)$ determined by its initial conditions at time $t_i$ (say $y(t_i)$, $y'(t_i)$, etc.), then I can choose any points $(t_0, y(t_0))$ and $(t_F, y(t_F))$ that lie on that specific solution $y(t)$ and use them in the formulas shown above; but if I pick arbitrary numbers to fill in $y(t_0)$ and $y(t_F)$, I believe the result will be garbage (I am not sure whether they would still lie on the specific $y(t)$ or not; maybe an iterative method to find them could be made, but I am not looking for that).

A "true" time-limited differential equation, since its solution has compact support, cannot have an analytic solution, so the solution cannot be obtained through a power series, nor can it be represented by a linear differential equation; so the transforms of Eq. 17 will only remove the discontinuity at the domain borders after the solution has already been found (and if its values at the borders are finite). Nevertheless, maybe they could be useful for finding terms to make some "matchings" among the equations' constants (I hope). As an example of a "true" time-limited differential equation, one can think of things like: $$ \frac{y''}{y'}+\frac{y'}{y}+\frac{2t+1}{t^2}=0,\,\,t_i = \frac{1}{1+\log(2)}\approx 0.59 ,\,\, y(t_i)=1,\,\, y'(t_i)=-(\log(2)+1)^2 \approx -2.86 $$ which has as solution $$ y(t)= \sqrt{e^{\frac{1-t}{t}}-1} $$ which lives in the reals only for $t \in (0,1]$.
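A symbolic sketch (added for illustration, using sympy) that checks this example: the residual of the equation should simplify, or at least evaluate numerically, to zero, and the data at $t_i$ should match the stated values.

```python
# Sketch: verify that y(t) = sqrt(exp((1-t)/t) - 1) satisfies y''/y' + y'/y + (2t+1)/t^2 = 0
# on (0, 1), and check the stated values at t_i = 1/(1 + log 2).
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.sqrt(sp.exp((1 - t) / t) - 1)
residual = sp.diff(y, t, 2) / sp.diff(y, t) + sp.diff(y, t) / y + (2 * t + 1) / t**2

print(sp.simplify(residual))                        # expected: 0
for v in (sp.Rational(1, 4), sp.Rational(1, 2), sp.Rational(3, 4)):
    print(residual.subs(t, v).evalf())              # each should be ~0 up to rounding

ti = 1 / (1 + sp.log(2))
print(sp.simplify(y.subs(t, ti)))                   # expected: 1
print(sp.simplify(sp.diff(y, t).subs(t, ti)))       # expected: -(1 + log(2))**2 (up to form)
```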

I don't know whether these results are already known or whether I accidentally discovered them; in the latter case, please tell me so I can look for someone who can formalize them (thinking of mathematicians I met at the university many years ago), and if you would like to develop this introduction yourself, I hope you consider me as the last author; it could help me if I try to work in research again. My profile is here.

If a third-world-country unemployed engineer, with too much free time, can discover this much by himself here, it means that there actually is a lot to do related to this problem. I hope I have motivated you to try to find these $\text{mysterious conditions}\,\mathbb{X}$, prove the validity of the methods and bounds above, use these tools to work with time-limited functions (maybe avoiding convolutions), and extend them; for example, maybe the transform $\mathring{\mathbb{F}}\{\,\}$ could be used to find new constants for the Kalman-Rota or the Landau-Kolmogorov-Hadamard inequalities.

I believe that at least any time-limited function for which: $$ \int\limits_{-\infty}^\infty \left|\mathring{\mathbb{F}}\left\{f'(t) \right\}_{(\omega)}\right| d\omega < \infty $$ will have a bounded maximum slew rate; these are the $\text{mysterious conditions}\,\mathbb{X}$ I continue looking for here.

Thank you very much for all your comments and help, and especially to user @LL3.14.