Is there a function with the property $f(n)=f^{(n)}(0)$?

Is there a real-analytic function $f:\mathbb{R}\rightarrow\mathbb{R}$, not identically zero, which satisfies

$$f(n)=f^{(n)}(0),\quad n\in\mathbb{N} \text{ or } \mathbb N^+?$$

What I got so far:

Set

$$f(x)=\sum_{n=0}^\infty\frac{a_n}{n!}x^n,$$

then the case $n=0$ holds trivially, and otherwise we have

$$a_n=f^{(n)}(0)=f(n)=\sum_{k=0}^\infty\frac{a_k}{k!}n^k.$$

Now $a_1=\sum_{k=0}^\infty\frac{a_k}{k!}1^k=a_0+a_1+\sum_{k=2}^\infty\frac{a_k}{k!},$ so

$$\sum_{k=2}^\infty\frac{a_k}{k!}=-a_0.$$

For $n=2$ we find

$$a_2=\sum_{k=0}^\infty\frac{a_k}{k!}2^k=a_0+2a_1+2a_2+\sum_{k=3}^\infty\frac{a_k}{k!}2^k.$$

The first case was special in that $a_1$ cancelled out, but now I have to juggle more and more terms.

I could express $a_1$ in terms of the higher $a$'s, then for $n=3$ solve for $a_2$, and so on; I didn't get far, however. Is there a closed expression? My plan was to argue that if I found such an algorithm expressing each $a$ in terms of the higher $a$'s, then in the limit the remaining sums would go to $0$ and I would eventually obtain my function.
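As a sanity check, the $n=1$ constraint $\sum_{k\ge 2} a_k/k! = -a_0$ can be tested numerically against a candidate of the form $a_k = \operatorname{Re}(c^k)$ with $e^c = c$ (such a solution is derived in the answers below); a Python sketch, where the numerical value of $c$ and the refinement by iterating $c \mapsto \log c$ are my own choices:

```python
import cmath
from math import factorial

# Candidate coefficients a_k = Re(c^k), where c solves e^c = c.
# Refine the quoted value of c by iterating c <- log(c) (converges here).
c = 0.3181315052 + 1.337235701j
for _ in range(50):
    c = cmath.log(c)
assert abs(cmath.exp(c) - c) < 1e-12

a = [(c ** k).real for k in range(60)]

# The n=1 constraint: sum_{k>=2} a_k/k! should equal -a_0.
lhs = sum(a[k] / factorial(k) for k in range(2, 60))
assert abs(lhs + a[0]) < 1e-12
```

Indeed, $\sum_{k\ge 0} c^k/k! = e^c = c$, so $\sum_{k\ge 2} c^k/k! = c - 1 - c = -1 = -c^0$, and taking real parts gives exactly the constraint.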

Or maybe there is a better approach to such a problem.


Let the complex number $c$ be a solution of $e^c=c$; for example $c = -W(-1)$, where $W$ is the Lambert W function. Then, since the function $f$ defined by

$$ f(x) = \sum_{n=0}^\infty \frac{e^{cn}x^n}{n!} $$

evaluates to $e^{e^c x} = e^{cx}$, we have $f(n) = e^{cn} = f^{(n)}(0)$. For a real solution, write $c = a+bi$ with real part $a$ and imaginary part $b$, and let $g(x)$ be the real part of $f(x)$. More explicitly:

$$ g(x) = \sum_{n=0}^\infty \frac{e^{an}\cos(bn)\;x^n}{n!} $$

evaluates to $e^{ax}\cos(bx)$. With the principal branch of Lambert W, this is approximately:

$$ g(x) = e^{0.3181315052\, x} \cos(1.337235701\, x) $$
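This can be checked numerically; a Python sketch, where computing the fixed point by iterating $c \mapsto \log c$ (which converges near this value) is my own choice:

```python
import cmath

# Find the fixed point c of exp (e^c = c) by iterating c <- log(c);
# the iteration converges from a start near 0.3 + 1.3i.
c = 0.3 + 1.3j
for _ in range(100):
    c = cmath.log(c)
assert abs(cmath.exp(c) - c) < 1e-12

# g(x) = Re(e^{c x}) has g^{(n)}(0) = Re(c^n), while g(n) = Re(e^{c n}).
# Since e^c = c implies e^{c n} = c^n, the two agree for every n.
for n in range(8):
    assert abs((cmath.exp(c * n)).real - (c ** n).real) < 1e-10
```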


(not yet an answer, but too long for a comment)

Oops, I see a better answer by G. Edgar was posted while I wrote this. Possibly I'll delete this answer soon.

To me this looks like an eigenvalue problem.
Let's use the following matrix and vector notation. The coefficients of the power series of the sought function $f(x)$ are collected in a column vector $A$.
We denote by $V(x)$ the "Vandermonde" row vector containing the consecutive powers of $x$, so that $\small V(x) \cdot A = f(x) $.
We denote the diagonal matrix of consecutive factorials by $F$ and its reciprocal (inverse) by $f$.
Finally, let $ZV$ denote the matrix collecting the rows $V(n)$ for consecutive $n$. Then we have first
$\qquad \small ZV \cdot A = F \cdot A $
and rearranging the factorials
$\qquad \small f \cdot ZV \cdot A = A $
which is an eigenvalue problem for the eigenvalue $\small \lambda=1 $. Thus we have the formal problem of solving
$\qquad \small \left(f \cdot ZV - I \right) \cdot A = 0 $
However, at the moment I do not see how to move to the next step...


[added] The solution by G. Edgar gives the needed hint.

If we do not rearrange, but expand:

$\qquad \small \begin{eqnarray} ZV \cdot A &=& F \cdot A \\ ZV \cdot (f \cdot F)\cdot A &=& F \cdot A \\ (ZV \cdot f) \cdot (F \cdot A) &=& (F \cdot A) \end{eqnarray} $

we get a better ansatz. Let's introduce the matrix constants $\small W=ZV \cdot f \qquad B=F \cdot A $ and rewrite this as
$\qquad \small W \cdot B = B $.
Now $W$ is the Carleman matrix which maps $\small x \to \exp(x) $ via

$\qquad \small W \cdot V(x) = V(\exp(x)) $
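This mapping property can be illustrated with a truncated Carleman matrix; a numerical sketch (the truncation size $N$ and the test point $x$ are arbitrary choices of mine):

```python
import numpy as np
from math import factorial

# Truncated N x N Carleman matrix of exp: W[n, k] = n^k / k!.
N = 40
W = np.array([[n ** k / factorial(k) for k in range(N)] for n in range(N)])

x = 0.5
Vx = x ** np.arange(N)             # V(x) = (1, x, x^2, ...)
Vex = np.exp(x) ** np.arange(N)    # V(exp(x))

# W . V(x) = V(exp(x)) holds in the first rows; the last rows suffer
# from truncation, since e^{n x} needs more than N powers of x.
for n in range(6):
    assert abs(W[n] @ Vx - Vex[n]) < 1e-10
```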

Thus if $\small x = \exp(x) $ for some $x$, i.e. $x$ is one of the (complex) fixpoints $\small t_k$ of $\small \exp$, we get one identity of the sought type:

$\qquad \small W \cdot V(t_k) = V(t_k) \to B = V(t_k) \to A = f \cdot B = f \cdot V(t_k) $

Then the sought function has the power series

$\qquad \small f(x) = 1 + t_k x + \frac{t_k^2}{2!} x^2 + \frac{t_k^3}{3!} x^3 + \ldots $

and $\small f(x) = \exp(t_k \ x) $

Because there are infinitely many such fixpoints (all of them complex), we have infinitely many solutions of this type. (There might be other types as well; the vectors $\small V(t_k) $ need not be the only possible eigenvectors of $W$.)
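The fixpoint identity $\small W \cdot V(t_k) = V(t_k)$ can likewise be checked with a truncated Carleman matrix; a sketch, where refining the fixpoint by iterating $t \mapsto \log t$ and the truncation size are my own choices:

```python
import numpy as np
from math import factorial

# Primary complex fixpoint t of exp, refined by iterating t <- log(t).
t = 0.3181315052 + 1.337235701j
for _ in range(100):
    t = np.log(t)
assert abs(np.exp(t) - t) < 1e-12

# Truncated Carleman matrix W[n, k] = n^k / k! and the vector V(t).
N = 40
W = np.array([[n ** k / factorial(k) for k in range(N)] for n in range(N)])
Vt = t ** np.arange(N)

# V(t) is (approximately) an eigenvector of W for eigenvalue 1:
# the first rows reproduce V(t); truncation spoils only the last rows.
for n in range(6):
    assert abs(W[n] @ Vt - t ** n) < 1e-8
```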