Fractional Calculus: Motivation and Foundations.
If this is too broad, I apologise; let's keep it focused on the basics if necessary.
What's the motivation and the rigorous foundations behind fractional calculus?
It seems very weird & beautiful to me. Did it arise from some set of applications? If so (and even if not), here's a suitable question concerning its "physical meaning" and history.
The Wikipedia article makes it look quite clear-cut: stick $\Gamma$ into Cauchy's formula for repeated integration. But why can we do that? Why is it listed under "Heuristics"? I know the Gamma function generalises the factorial, but that's as much as I understand.
"Why ask?"
Because I like to see how different areas of Mathematics fit together. I like the way fractional calculus seems to take integration & differentiation and ask, "well, do we really need to do these things a natural number of times?" - and so on. So I'm just curious :)
I think there is more to be found out about this subject. There are many different ways to define fractional derivatives and integrals. I do not know whether these come from any deep, fundamental facts, but they certainly arise as generalizations of various formulas. Another way to think about the subject is as a list of applications or tricks involving certain integral transforms that happen to be called "fractional derivatives". But let's think about clear generalizations of integration and differentiation.
For instance, there is indeed the generalization of Cauchy's integral formula $$f^{(n)}(z) = \frac{n!}{2\pi i} \int_C \frac{f(w)}{(w-z)^{n+1}}\, dw,$$ but there are many more. It is known that for integer $n$, $$\int_a^x \int_a^{x_1} \cdots \int_a^{x_{n-1}} f(x_n)\, dx_n \cdots dx_1 = \frac{1}{(n-1)!} \int_a^x (x-t)^{n-1}f(t)\,dt,$$ where we've integrated on the left $n$ times. Here we might substitute fractional $q$ for $n$ and the gamma function for the factorial to define a fractional integral: $$I^q f(x) = \frac{1}{\Gamma(q)} \int_a^x (x-t)^{q-1} f(t)\, dt.$$ Note that this integral does not converge for $q \le 0$. Hence, for fractional derivatives, we have two choices. If we let $I^q$ be the $q$th-order integral operator and $D$ the ordinary differentiation operator, then for a given $q>0$ and integer $n>q$, we could define $$D^q f = D^{n}I^{n-q} f,$$ which is known as the Riemann–Liouville definition, or, in the opposite order, $$D^q f = I^{n-q} D^n f,$$ which is known as the Caputo definition. But there are more! By an induction argument, one can show that $$f^{(n)}(x) = \lim_{h \to 0} \frac{1}{h^n} \sum_{k=0}^n (-1)^k {n \choose k} f(x-kh).$$ We can generalize by replacing $n$ with fractional $q$ and changing the upper bound to $\infty$ to get $$f^{(q)}(x) = \lim_{h \to 0} \frac{1}{h^q} \sum_{k=0}^\infty (-1)^k {q \choose k} f(x-kh),$$ where ${q \choose k}$ is defined as usual, or, in terms of the gamma function, ${q \choose k} = \frac{\Gamma(q+1)}{\Gamma(q-k+1) \Gamma(k+1)}$. Surprisingly, it turns out that we can develop a very similar formula using Riemann sums on a series of repeated integrals. It complicates things slightly, but it means integration and differentiation can be united under one formula. This is the Grünwald–Letnikov definition.
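As a sanity check, both definitions above can be evaluated numerically. The sketch below (plain Python; the function names are my own) verifies them for $f(t)=t$, $q=1/2$, lower limit $a=0$, against the fractional power rule $I^q t = \frac{\Gamma(2)}{\Gamma(2+q)}x^{1+q}$ and $D^q t = \frac{\Gamma(2)}{\Gamma(2-q)}x^{1-q}$:

```python
from math import gamma, sqrt, pi

def rl_integral(f, q, x, n=20000):
    """Riemann-Liouville fractional integral I^q f(x), lower limit a = 0.
    The substitution u = (x - t)**q removes the integrable singularity
    at t = x, so a plain midpoint rule converges."""
    du = x ** q / n
    s = sum(f(x - ((k + 0.5) * du) ** (1.0 / q)) for k in range(n))
    return s * du / (q * gamma(q))

def gl_derivative(f, q, x, n=2000):
    """Grunwald-Letnikov fractional derivative of order q, lower terminal 0,
    with step h = x/n and the recurrence C(q,k) = C(q,k-1)*(q-k+1)/k."""
    h = x / n
    c, s = 1.0, f(x)                    # k = 0 term: C(q,0) = 1
    for k in range(1, n + 1):
        c *= (q - k + 1) / k
        s += (-1) ** k * c * f(x - k * h)
    return s / h ** q

f = lambda t: t
print(rl_integral(f, 0.5, 1.0), gamma(2) / gamma(2.5))   # both ~0.752253
print(gl_derivative(f, 0.5, 1.0), 2 / sqrt(pi))          # both ~1.12838
```

Applying `rl_integral` with $q=1/2$ twice reproduces the ordinary integral $x^2/2$, which is the semigroup property $I^{1/2}I^{1/2}=I^1$ these definitions are built to satisfy.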
Still more! If you know anything about Fourier series, the original repeated-integral generalization is no big surprise, since it essentially looks like a convolution. We know that the Laplace and Fourier transforms both turn differentiation and integration into multiplication or division by $s$ (or by $ik$). So we may also define something like $$D^q f(t) = F^{-1}[(ik)^q F[f(t)]]$$ or $$D^q f(t) = L^{-1}[s^q L[f(t)]].$$ These are the Fourier and Laplace generalizations. I think most people, if they were to guess at a generalization, would pick these.
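The Fourier definition is easy to test on a periodic grid: multiply each mode by $(ik)^q$ (principal branch) and transform back, then compare with the classical phase-shift identity $D^q \sin t = \sin(t + q\pi/2)$. A minimal sketch in plain Python with a naive $O(n^2)$ DFT (the helper names are my own):

```python
import cmath
from math import pi, sin

def dft(xs, sign):
    """Naive discrete Fourier transform; sign=-1 forward, sign=+1 inverse (unscaled)."""
    n = len(xs)
    return [sum(x * cmath.exp(sign * 2j * pi * j * k / n) for j, x in enumerate(xs))
            for k in range(n)]

n, q = 64, 0.5
ts = [2 * pi * j / n for j in range(n)]
F = dft([sin(t) for t in ts], -1)
# integer wavenumbers in standard DFT order: 0..n/2-1, then -n/2..-1
ks = list(range(n // 2)) + list(range(-n // 2, 0))
G = [(1j * k) ** q * c for k, c in zip(ks, F)]        # multiply by (ik)^q
dqf = [x.real / n for x in dft(G, +1)]                # inverse transform
err = max(abs(y - sin(t + q * pi / 2)) for y, t in zip(dqf, ts))
print(err)
```

The error comes out at round-off level, since $\sin$ occupies a single Fourier mode; for functions that are not nicely periodic, the Fourier definition needs more care about which function classes it applies to.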
Some of these definitions are equivalent, some are different, and some are nearly the same up to annoying details. For instance, the Riemann–Liouville definition agrees with the Grünwald–Letnikov definition; I know of a proof in Oldham and Spanier's book. There's still more to think about: we want to see how many traditional calculus rules still apply. I've seen some new papers on arXiv about fractional calculus. One is a proof that Leibniz's rule can never hold. Another is an apparently new definition of a fractional differintegral operator I saw mentioned in this paper.
Is there some natural definition of fractional calculus? The Gamma function, used in several of these definitions, is itself one of many (theoretical) generalizations of the factorial function. We could ask the same question for it, except that the Gamma function is the sole log-convex function that appropriately generalizes the factorial; this is the Bohr–Mollerup theorem. So a similar question would be: has anyone come up with an appropriate, reasonable constraint on a fractional integral operator that makes it unique? I don't know. Perhaps some results have been shown, but I am not aware of them.
So my intuitive grasp of the subject is this: we want to understand all possible definitions of fractional derivatives and to consider what properties they have or lack compared to traditional calculus. It is certainly true that fractional derivatives will make sense only for some classes of functions, as with the Fourier transform, so the right definition may be context-specific.
I've heard tell of models that use fractional calculus for materials with fractional dimensions. I've also seen a paper generalizing Newton's second law, not for any real physical consequence, but for the sake of seeing what the math says. But my impression is that some modeling is done. What is certain is that people are studying fractional differential equations, including the fractional diffusion-wave equation (which unites the heat and wave equations through a fractional time derivative) and the fractional Schrödinger equation, which has a fractional spatial derivative. So we have some interesting equations to solve, which is nice, but I also think there is a world of function identities to compute with fractional derivatives, involving all sorts of special functions, when one sits down to compute things. (For one fun result, the linear fractional differential equation $y^{(\pi)}+y=0$, using the right definition, has an infinite number of linearly independent solutions!)
In this answer, I explain how Euler defined $\zeta(-n)$ for $n\geq 0$. Namely, he defined
$$\zeta(-n):= (1-2^{n+1})^{-1} \frac{d^n}{dx^n}\frac{e^x}{1+e^x}\biggr|_{x=0}$$
This does not seem to suggest a definition of $\zeta(-s)$ for $\Re s\geq 0$; that is, unless we can make sense of $\frac{d^s}{dx^s}$ for a complex number $s$... and indeed, if you write down the integral suggested by the theory of fractional calculus, you will get the analytic continuation of the zeta function!
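Before going fractional, one can check Euler's integer-order formula with exact arithmetic. Since the logistic function $s(x)=\frac{e^x}{1+e^x}$ satisfies $s' = s - s^2$, every derivative of it is a polynomial in $s$; building that polynomial by a simple recurrence and evaluating at $s(0)=\tfrac12$ gives $\zeta(-n)$ exactly. A sketch in Python (the function name is my own):

```python
from fractions import Fraction

def zeta_neg(n):
    """Euler's formula: zeta(-n) = (1 - 2^(n+1))^(-1) d^n/dx^n [e^x/(1+e^x)] at x = 0.
    Represent the n-th derivative as a polynomial sum(a_k * s^k) in s = e^x/(1+e^x),
    using s' = s - s^2, then evaluate at s(0) = 1/2."""
    a = {1: Fraction(1)}                          # 0th derivative: the polynomial s
    for _ in range(n):
        b = {}
        for k, c in a.items():                    # (s^k)' = k s^(k-1) (s - s^2)
            b[k] = b.get(k, Fraction(0)) + c * k
            b[k + 1] = b.get(k + 1, Fraction(0)) - c * k
        a = b
    deriv = sum(c * Fraction(1, 2) ** k for k, c in a.items())
    return deriv / (1 - 2 ** (n + 1))

print(zeta_neg(0), zeta_neg(1), zeta_neg(3))   # -1/2 -1/12 1/120
```

These match the classical values $\zeta(0)=-\tfrac12$, $\zeta(-1)=-\tfrac1{12}$, $\zeta(-3)=\tfrac1{120}$; the fractional-calculus step is then to replace the $n$th derivative here by a $D^s$ of complex order.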
The bibliography on fractional calculus is extensive. After some precursors during the 18th century, the most striking advances are those of J. Liouville in several of his memoirs to the École Polytechnique in Paris between 1832 and 1835, followed by the contribution of B. Riemann in 1847; the names of these two mathematicians remain attached to the famous transform at the heart of differintegral calculus.
More recently, fractional calculus has found important applications not only in pure mathematics (let us cite Erdélyi and Higgins, among many authors), but also across vast domains of the physical sciences. Heaviside was a brilliant precursor who, from 1920, used fractional calculus in his research on electromagnetic propagation [Oliver Heaviside, Electromagnetic Theory, 1920; reprint: Dover Pub., New York, 1950]. Numerous examples are given in the book "The Fractional Calculus", Keith B. Oldham and Jerome Spanier, Academic Press, New York, 1974, concerning rheology, diffusion, hydrodynamics, thermodynamics, electrochemistry, etc.
In the paper "The fractional derivation" published on Scribd (translation pp. 7-12): http://www.scribd.com/JJacquelin/documents a formal generalization of some basic relationships for electrical components, thanks to fractional derivation, appears in the table below (copied from the referenced paper, p. 11). It has remarkable consequences for networks made of associations of these components and for calculations of equivalent networks. This is developed further in the paper "The Phasance Concept" (same Scribd link).
Quite beautifully, fractional/nonlocal operators $\mathcal G$ are the natural notion of differentiation to describe stochastic processes with jumps.
If you write out the generator $\mathcal G$ of a Markov chain $P=\{p_{i,j}\}_{i,j\in \text{State space}}$ (which is intrinsically jumpy), what you obtain is $$ \mathcal G f(x)=(P-I)f(x)=\sum_{y\in\text{State space}}(f(y)-f(x))p_{x,y}. $$ The intuition is clear: the jump from $x$ to $y$ (in one unit of time, in this case) is assigned intensity/probability $p_{x,y}$. Similarly, the fractional Laplacian $\mathcal G=-(-\Delta)^{\frac{\alpha}{2}}$ is the generator of a symmetric $\alpha$-stable Lévy process, and it is given by $$ \mathcal G f(x)=-(-\Delta)^{\frac{\alpha}{2}}f(x)=\int_{\mathbb R^{n}}(f(x+y)-f(x))\,C_{n,\alpha}\frac{dy}{|y|^{n+\alpha}}, $$ where the intensity of a jump of size $y$ now decays like $|y|^{-n-\alpha}$.
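The discrete generator formula is easy to play with directly. A toy two-state example in Python (the transition matrix and test function are made up for illustration):

```python
# Generator G = P - I of a two-state Markov chain, applied to a test function f.
P = [[0.9, 0.1],
     [0.4, 0.6]]          # transition probabilities p_{x,y}; each row sums to 1
f = [5.0, 2.0]            # a function on the state space {0, 1}

def Gf(x):
    # sum over jump targets y of (f(y) - f(x)) weighted by the intensity p_{x,y}
    return sum((f[y] - f[x]) * P[x][y] for y in range(len(P)))

print(Gf(0), Gf(1))
```

With these numbers $\mathcal Gf(0) = (2-5)\cdot 0.1 = -0.3$ and $\mathcal Gf(1) = (5-2)\cdot 0.4 = 1.2$, which agrees with $(Pf-f)(x)$ because each row of $P$ sums to $1$; the fractional Laplacian is the continuum analogue with jump intensity $C_{n,\alpha}|y|^{-n-\alpha}$ in place of $p_{x,y}$.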
The Caputo derivative $\mathcal G=\ ^{C}D^{\beta}_a$ of order $\beta\in(0,1)$ has the (generator) representation \begin{align} ^{C}D^{\beta}_a f(x)&= \int_0^{a-x}(f(x+y)-f(x))\nu(y)dy +(f(a)-f(x))\int_{a-x}^\infty\nu(y)dy, \end{align} where $\nu(y):=C_\beta y^{-1-\beta}$ and $x<a$. Here the intuition is that $^{C}D^{\beta}_a$ is the generator of a stable process $X^{\beta}(s)$ absorbed once it crosses the barrier $\{a\}$. In more detail:
consider the process $X^{\beta}$ starting at $x<a$. The first term sums the intensities $\nu(y)$ over every jump from $x$ to $x+y$ landing below $a$ ($0\le y\le a-x$), namely $$\text{sum}_y\ (f(x+y)-f(x)) \nu(y).$$ The second term is (i) a standard killing term $-f(x)$ times an (unbounded) coefficient $b(x):=\int_{a-x}^{\infty}\nu(y)dy$, which collects all the intensity of the jumps that would have landed above $a$ (for the process starting at $x$), and (ii) a regenerating term $+f(a)b(x)$ with the same coefficient/sum of intensities, which absorbs the process at $\{a\}$.
Hence the Caputo operator is not only nonlocal, but it also contains boundary information! For a probabilistic proof that follows this intuition, see the Appendix here.
Hence, just as you like to study second-order operators such as the Laplacian, you should equally like to study the many fractional differential operators. Taking as motivation the successes of probability in the local-operator case, fractional operators offer very concrete probabilistic intuition (think of the intuition Brownian motion provides for the heat equation), which makes such models worth applying. New and interesting properties also arise from nonlocality and boundary information (such as memory and trapping phenomena); these are interesting for both theory and applications, and often their probabilistic features are what allow you both to uncover these properties and to gain intuition.
(Sorry for possibly sounding too probabilistically enthusiastic, but the answers above seem to ignore the probabilistic aspects, so I thought it would be fair to push them a bit.)
- Why were exponents generalized to fractional values?
- Why was Newton's binomial theorem generalized to fractional exponents?
- Why was the factorial generalized to fractional arguments through the $\Gamma$ function?
- Why is anything usually generalized in mathematics?