Functions that are their own $n$th derivatives for real $n$

Consider (non-trivial) functions that are their own $n$th derivatives. For instance:

$\frac{\mathrm{d}}{\mathrm{d}x} e^x = e^x$

$\frac{\mathrm{d}^2}{\mathrm{d}x^2} e^{-x} = e^{-x}$

$\frac{\mathrm{d}^3}{\mathrm{d}x^3} e^{\frac{-x}{2}}\sin(\frac{\sqrt{3}x}{2}) = e^{\frac{-x}{2}}\sin(\frac{\sqrt{3}x}{2})$

$\frac{\mathrm{d}^4}{\mathrm{d}x^4} \sin x = \sin x$

$\cdots$

Let $f_n(x)$ be the function that is its own $n$th derivative. I believe (but I'm not sure) that for positive integer $n$, this function can be written as the following power series:

$f_n(x) = 1 + \cos(\frac{2\pi}{n})x + \cos(\frac{4\pi}{n})\frac{x^2}{2!} + \cos(\frac{6\pi}{n})\frac{x^3}{3!} + \cdots + \cos(\frac{2t\pi}{n})\frac{x^t}{t!} + \cdots$

Is there some sense in which this function can be extended to real $n$ using fractional derivatives? Would it then be possible to graph $z(n, x) = f_n(x)$, and would this function be smooth and continuous in both $n$ and $x$? Or would it have many discontinuities?
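A minimal symbolic check of the series above is sketched below, assuming sympy is available; the choices $n = 3$ and truncation order $T = 12$ are arbitrary and only for the check. It relies on the fact that $\cos(\frac{2t\pi}{n}) = \cos(\frac{2(t-n)\pi}{n})$, so differentiating the truncated series $n$ times reproduces the series truncated $n$ terms earlier.

```python
# Sanity-check sketch (sympy assumed): the n-th derivative of the truncated series
# equals the same series truncated n terms earlier, because cos(2*pi*t/n) is periodic in t.
import sympy as sp

x = sp.symbols('x')
n, T = 3, 12                       # arbitrary choices for the check

def S(m):
    """Partial sum  sum_{t=0}^{m} cos(2*pi*t/n) * x**t / t!  of the proposed series."""
    return sum(sp.cos(2*sp.pi*t/n) * x**t / sp.factorial(t) for t in range(m + 1))

print(sp.simplify(sp.diff(S(T), x, n) - S(T - n)))   # expected output: 0
```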


The set of functions $f$ such that $f^{(n)}-f=0$ is a vector space of dimension $n$, spanned by the functions $e^{\lambda t}$ with $\lambda$ an $n$th root of unity. In particular, there are many such functions, not just one: the general such function is of the form $$f(t)=\sum_{k=0}^{n-1}a_ke^{\exp(2\pi i k/n)t}.$$ This is explained in every text on ordinary differential equations; I remember fondly, for example, Theory of Ordinary Differential Equations by Earl A. Coddington and Norman Levinson, but I am sure you can find more modern expositions in every library.
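For a concrete illustration, here is a small sympy sketch (the choice $n = 4$ and the coefficients $a_k$ below are arbitrary, made only for the check) verifying that such a combination is indeed its own $n$th derivative.

```python
# Sketch (sympy assumed): a linear combination of exp(lambda*t) over the 4th roots
# of unity lambda is its own 4th derivative; the coefficients are arbitrary.
import sympy as sp

t = sp.symbols('t')
n = 4
a = [2, -1, sp.I, sp.Rational(1, 3)]             # arbitrary coefficients a_k
f = sum(a[k] * sp.exp(sp.exp(2*sp.pi*sp.I*k/n) * t) for k in range(n))

print(sp.simplify(sp.diff(f, t, n) - f))         # expected output: 0
```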


What you are interested in here is what Spanier and Oldham term "cyclodifferential functions": functions that are regenerated after being differintegrated to the appropriate order (or, to put it another way, functions that are eigenfunctions of the differintegration operator).

Consider the cyclodifferential equation

$${}_0 D_x^{\alpha}y=y$$

(or, in more familiar notation, $\frac{\mathrm{d}^{\alpha}y}{\mathrm{d}x^{\alpha}}=y$; the problem with this notation in the setting of differintegrals is that it neglects the lower limit that is present in both the Caputo and Riemann-Liouville definitions).

As you might have seen, $\exp(x)$ is a cyclodifferential function for ${}_0 D_x^1 y$ (i.e. $\exp(x)$ is its own derivative), and $\cosh(x)$ and $\sinh(x)$ are cyclodifferentials for ${}_0 D_x^2 y$ (differentiating those two functions twice gives back the originals);

$$\frac1{\sqrt{\pi x}}+\exp(x)\mathrm{erfc}(-\sqrt{x})$$

($\mathrm{erfc}(x)$ is the complementary error function)

is a cyclodifferential for ${}_0 D_x^{\frac12} y$ (it is its own semiderivative), and in general

$$x^{\alpha-1}\sum_{j=0}^\infty \frac{C^j x^{\alpha j}}{\Gamma(\alpha(j+1))}$$

for $\alpha > 0$ and $C$ an appropriate eigenvalue, is a cyclodifferential for ${}_0 D_x^{\alpha}$. (This general solution can alternatively be expressed in terms of the Mittag-Leffler function.)
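Two quick checks of this are sketched below, under stated assumptions (sympy and mpmath available; the truncation orders, the evaluation point, and the choices $\alpha = \frac12$, $C = 1$ are arbitrary): (a) applying the Riemann-Liouville power rule ${}_0 D_x^{a}\, x^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1-a)}x^{\beta-a}$ term by term sends the truncated series to $C$ times a slightly shorter truncation; (b) at $\alpha = \frac12$, $C = 1$ the series numerically reproduces the $\mathrm{erfc}$ expression above.

```python
# Sketch (sympy and mpmath assumed; alpha = 1/2, C = 1 and the truncation
# orders are arbitrary choices made only for this check).
import sympy as sp
import mpmath as mp

x = sp.symbols('x', positive=True)
alpha = sp.Rational(1, 2)        # order of the differintegral
C = 1                            # eigenvalue, so that D^alpha y = y
J = 10                           # truncation order of the series

def rl_power_rule(beta, a):
    """Riemann-Liouville differintegral of x**beta of order a:
    D^a x**beta = Gamma(beta+1)/Gamma(beta+1-a) * x**(beta-a)."""
    return sp.gamma(beta + 1) / sp.gamma(beta + 1 - a) * x**(beta - a)

term = lambda j: C**j * x**(alpha*(j + 1) - 1) / sp.gamma(alpha*(j + 1))

# (a) Term by term, the power rule sends the series truncated at J to C times
#     the series truncated at J - 1 (the j = 0 term dies because 1/Gamma(0) = 0).
image = sum(C**j / sp.gamma(alpha*(j + 1)) * rl_power_rule(alpha*(j + 1) - 1, alpha)
            for j in range(J + 1))
print(sp.simplify(image - C * sum(term(j) for j in range(J))))   # expected: 0

# (b) At alpha = 1/2, C = 1 the series should match 1/sqrt(pi x) + exp(x)*erfc(-sqrt(x)).
X = mp.mpf(2)                    # arbitrary evaluation point
series = sum(X**((j + 1)/mp.mpf(2) - 1) / mp.gamma((j + 1)/mp.mpf(2)) for j in range(60))
closed = 1/mp.sqrt(mp.pi*X) + mp.exp(X)*mp.erfc(-mp.sqrt(X))
print(mp.nstr(series - closed, 5))               # expected: ~0 (up to working precision)
```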


The solutions to a homogeneous linear equation with constant coefficients $$ a_{n}y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1y' + a_0y = 0$$ correspond to roots of the "auxiliary polynomial" $a_nt^n + \cdots + a_0$. If the polynomial has roots $r_1,\ldots,r_k$ (in the complex numbers), with multiplicities $m_1,\ldots,m_k$, then a basis for the solutions is given by $$\{ e^{r_1x}, xe^{r_1x},\ldots, x^{m_1-1}e^{r_1x},e^{r_2x},\ldots,x^{m_k-1}e^{r_kx}\}.$$ Here, the complex exponential is used, so that if $a$ and $b$ are real numbers and $i$ is the square root of $-1$, we have $$e^{a+bi} = e^a(\cos(b) + i \sin(b)).$$
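As a small illustration of the repeated-root case, here is a sympy sketch; the particular equation $y'' - 2y' + y = 0$, whose auxiliary polynomial $(t-1)^2$ has the double root $r = 1$, is an arbitrary example chosen only for the check.

```python
# Sketch (sympy assumed): for y'' - 2y' + y = 0 the auxiliary polynomial (t - 1)^2
# has the double root r = 1, and the claimed basis is {e^x, x*e^x}.
import sympy as sp

x = sp.symbols('x')
for f in (sp.exp(x), x*sp.exp(x)):
    print(sp.simplify(sp.diff(f, x, 2) - 2*sp.diff(f, x) + f))   # expected: 0, 0
```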

In your case, you are looking at the polynomial $t^n - 1 = 0$, whose roots are the $n$th roots of unity. They are all distinct, so you can either take the complex exponentials, or their real and imaginary parts. If $\lambda$ is a primitive $n$th root of $1$, then a basis for the space of solutions is $$ e^x, e^{\lambda x}, e^{\lambda^2 x},\ldots,e^{\lambda^{n-1}x}.$$ The general solution is a linear combination of these with complex coefficients: $$f(x) = a_0e^x + a_1e^{\lambda x} + a_2e^{\lambda^2x}+\cdots+a_{n-1}e^{\lambda^{n-1}x},\qquad a_0,\ldots,a_{n-1}\in\mathbb{C}.$$ If you don't want complex values, you can take a general form as above and take the real and imaginary parts separately.
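To tie this back to the third example in the question, here is a sympy sketch for $n = 3$ (an arbitrary choice): with $\lambda = e^{2\pi i/3}$, the real and imaginary parts of $e^{\lambda x}$ are $e^{-x/2}\cos(\frac{\sqrt{3}x}{2})$ and $e^{-x/2}\sin(\frac{\sqrt{3}x}{2})$, and each is its own third derivative.

```python
# Sketch (sympy assumed): real and imaginary parts of exp(lambda*x) for the primitive
# cube root of unity lambda = -1/2 + sqrt(3)/2*i; each is its own third derivative.
import sympy as sp

x = sp.symbols('x', real=True)
re_part = sp.exp(-x/2) * sp.cos(sp.sqrt(3)*x/2)
im_part = sp.exp(-x/2) * sp.sin(sp.sqrt(3)*x/2)
for h in (re_part, im_part):
    print(sp.simplify(sp.diff(h, x, 3) - h))     # expected: 0, 0
```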


Sorry to give yet another answer that does not address the issue of fractional $n$ [it seems that fractional derivatives are not such a familiar topic to many research mathematicians; certainly they're not to me], but:

There is a little issue here which has not been addressed. From the context of the OP's question, I gather s/he is looking for real-valued functions which are equal to their $n$th derivative (and not to their $k$th derivative for any $k < n$). Several answerers have mentioned that the set of solutions to $f^{(n)} = f$ forms an $n$-dimensional vector space. But over what field? It is easier to identify the space of such complex-valued functions, i.e., $f: \mathbb{R} \rightarrow \mathbb{C}$: namely, a $\mathbb{C}$-basis is given by the functions $f_k(x) = e^{e^{2 \pi i k/n} x}$ for $0 \leq k < n$. But what does this tell us about the $\mathbb{R}$-vector space of real-valued solutions to this differential equation?

The answer is that it is $n$-dimensional as an $\mathbb{R}$-vector space, though it does not have such an immediately obvious and nice basis.

Let $W$ be the $\mathbb{R}$-vector space of real-valued functions $f$ with $f^{(n)} = f$ and $V$ the $\mathbb{C}$-vector space of $\mathbb{C}$-valued functions $f$ with $f^{(n)} = f$.

There is a natural inclusion map $W \hookrightarrow V$. Phrased algebraically, the question is whether the induced map $L: W \otimes_{\mathbb{R}} \mathbb{C} \rightarrow V$ is an isomorphism of $\mathbb{C}$-vector spaces. In other words, the question is whether any given $\mathbb{R}$-basis of $W$ is also a $\mathbb{C}$-basis of $V$. This is certainly not automatic. For instance, viewing the Euclidean plane first as $\mathbb{R}^2$ and then as $\mathbb{C}$ gives a map $\mathbb{R}^2 \rightarrow \mathbb{C}$ which certainly does not induce an isomorphism upon tensoring with $\mathbb{C}$, since the first space has (real) dimension $2$ but the second space has (complex) dimension $1$.

For more on this, see Theorem 1.6 of

http://www.math.uconn.edu/~kconrad/blurbs/galoistheory/galoisdescent.pdf

It turns out that this is actually a problem in Galois descent: according to Theorem 2.14 of the notes of Keith Conrad already cited above, the map $L$ is an isomorphism iff there exists a conjugate-linear involution $r: V \rightarrow V$, i.e., a map which is self-inverse and satisfies, for any $z \in \mathbb{C}$ and $v \in V$, $r(zv) = \overline{z}\, r(v)$.
But indeed we have such a thing: an element of $V$ is just a complex-valued function $f$, so we put $r(f) = \overline{f}$. Note that this stabilizes $V$, since the differential equation $f^{(n)} = f$ "is defined over $\mathbb{R}$": or, more simply, the complex conjugate of the $n$th derivative is the $n$th derivative of the complex conjugate. Thus we have "descent data" (or, in Keith Conrad's terminology, a G-structure), and the real solution space has the same dimension as the complex solution space.

It is a nice exercise to use these ideas to construct an explicit real basis of $W$.
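Here is one computational sketch of that exercise (sympy assumed; $n = 4$ is an arbitrary choice): take the real and imaginary parts of the complex exponentials $e^{\zeta^k x}$, check that each solves $f^{(n)} = f$, and check that together they span an $n$-dimensional real space by looking at the matrix of initial conditions.

```python
# Sketch (sympy assumed; n = 4 chosen arbitrarily): real and imaginary parts of the
# complex exponentials give real solutions of f^(n) = f spanning an n-dimensional space.
import sympy as sp

x = sp.symbols('x', real=True)
n = 4
candidates = []
for k in range(n):
    zeta_k = sp.exp(2*sp.pi*sp.I*k/n)            # k-th power of a primitive n-th root of unity
    re_part, im_part = sp.exp(zeta_k*x).as_real_imag()
    candidates += [re_part, im_part]

# each real or imaginary part satisfies the real equation f^(n) = f ...
assert all(sp.simplify(sp.diff(f, x, n) - f) == 0 for f in candidates)

# ... and their initial-condition vectors (f(0), f'(0), ..., f^(n-1)(0)) span R^n.
M = sp.Matrix([[sp.diff(f, x, j).subs(x, 0) for j in range(n)] for f in candidates])
print(M.rank())                                  # expected: 4
```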