How to prove that a bounded function $f$ satisfying $f\left( x \right) + a\int_{x - 1}^x {f\left( t \right)\,dt} = C$ for some constant $C$ must itself be constant?

The proof below works for $a \in (0,1)$ (in fact, for $|a| < 1$). Bugs are still being sorted out in the generalization.


First, note that $\int_{x-1}^x f(t)dt$ is differentiable, so $f(x) = C - \int_{x-1}^x f(t)dt$ is differentiable.

Next, we have $f'(x) = af(x-1) - af(x)$, so $|f'(x)| \leq a(|f(x-1)| + |f(x)|)$. Since $f$ is bounded, this implies that $f'$ is bounded.

The crux: Fix $x$. By the mean value theorem, there is some $y\in (x-1,x)$ with $f'(y) = f(x)-f(x-1)$, so $f'(x) = -af'(y)$.

Continuing in this manner, we find, for any $k$, some $y_k$ with $f'(x) = (-a)^k f'(y_k)$. Letting $k\to\infty$, and using the fact that $f'$ is bounded, we find that $f'(x) = 0$. This holds for all $x$, so $f$ is constant.
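As a quick numerical sanity check of the setup (a sketch of my own, assuming nothing beyond NumPy; the choice $a = 1/2$ and the sample functions are mine), a constant $f$ makes $f(x) + a\int_{x-1}^x f(t)\,dt$ independent of $x$, while a non-constant candidate such as $\sin$ does not:

```python
import numpy as np

a = 0.5  # hypothetical choice of a in (0, 1)

def lhs(f, x, n=20_000):
    """f(x) + a * integral of f over [x-1, x], via the trapezoid rule."""
    t = np.linspace(x - 1.0, x, n)
    y = f(t)
    integral = (t[1] - t[0]) * (y.sum() - 0.5 * (y[0] + y[-1]))
    return float(f(np.asarray(x, dtype=float))) + a * integral

const = lambda t: 3.0 + 0.0 * np.asarray(t, dtype=float)

# A constant function gives the same left-hand side at every x ...
vals = [lhs(const, x) for x in (-2.0, 0.0, 5.0)]
assert max(vals) - min(vals) < 1e-9

# ... while sin does not, so sin cannot satisfy the equation.
vals_sin = [lhs(np.sin, x) for x in (-2.0, 0.0, 5.0)]
assert max(vals_sin) - min(vals_sin) > 0.1
```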


PhoemueX and I have both concluded that Taylor's Theorem proves that $f$ is analytic. It's not obvious whether this helps, but it feels significant, so I thought I'd include it.


As seen in the post of @Slade, there is an easy proof for the case $a\in\left(0,1\right)$. The general case is more involved and the only proof I found uses concepts from Fourier Analysis and the theory of tempered distributions. I will try to explain these a bit, but I think it will not really be understandable if you have never heard of these things. So here goes:

By assumption, $f$ is bounded and thus defines a tempered distribution. It is easy to see (cf. the answer of @Slade for example) that $f$ is continuously differentiable with bounded derivative $$ f'\left(x\right)=a\cdot\left[f\left(x-1\right)-f\left(x\right)\right]=a\cdot\left(T_{1}f-f\right)\left(x\right), $$ where I used the notation $\left(T_{x}f\right)\left(y\right)=f\left(y-x\right)$ for the translation of $f$ by $x$. For the boundedness, simply observe $$ \left|f'\left(x\right)\right|\leq a\cdot\left[\left|f\left(x-1\right)\right|+\left|f\left(x\right)\right|\right]\leq2a\cdot\left\Vert f\right\Vert _{\sup}<\infty. $$ Hence, $f\in W^{1,\infty}\left(\mathbb{R}\right)$, so that standard arguments show that the distributional derivative of $f$ coincides with the ordinary derivative of $f$, considered as a tempered distribution.

In the following, I use the convention $$ \widehat{g}\left(\xi\right)=\int_{\mathbb{R}}g\left(x\right)\cdot e^{-2\pi i\xi x}\, dx $$ for the Fourier transform of a function. For a Schwartz-function $g\in\mathcal{S}\left(\mathbb{R}\right)$, partial integration yields \begin{eqnarray*} \widehat{g'}\left(\xi\right) & = & \int_{\mathbb{R}}g'\left(x\right)\cdot e^{-2\pi i\xi x}\, dx=-\int_{\mathbb{R}}g\left(x\right)\cdot\frac{d}{dx}e^{-2\pi i\xi x}\, dx\\ & = & 2\pi i\xi\cdot\int_{\mathbb{R}}g\left(x\right)e^{-2\pi i\xi x}\, dx=2\pi i\xi\cdot\widehat{g}\left(\xi\right). \end{eqnarray*} By duality, we conclude $$ 2\pi i\xi\cdot\widehat{f}=\widehat{f'}=\mathcal{F}\left(a\cdot\left[T_{1}f-f\right]\right)=a\cdot\left(M_{-1}\widehat{f}-\widehat{f}\right)=a\cdot\left(e^{-2\pi i\xi}-1\right)\cdot\widehat{f}, $$ where $\left(M_{\omega}g\right)\left(x\right)=e^{2\pi i\omega x}\cdot g\left(x\right)$ denotes the modulation of $g$. It is important to note here that $\widehat{f}$ might not actually be a (pointwise defined) function, but is merely a tempered distribution. The distribution $2\pi i\xi\cdot\widehat{f}$ acts on Schwartz functions by $$ \left\langle 2\pi i\xi\cdot\widehat{f},g\right\rangle _{\mathcal{S}',\mathcal{S}}=\left\langle \widehat{f},2\pi i\xi\cdot g\right\rangle _{\mathcal{S}',\mathcal{S}}. $$ By rearranging the above identity, we see $$ \underbrace{\left(a\cdot\left[1-\cos\left(2\pi\xi\right)\right]+i\left(2\pi\xi+a\sin\left(2\pi\xi\right)\right)\right)}_{=:\gamma\left(\xi\right)}\cdot\widehat{f}=\left(2\pi i\xi-a\cdot\left(e^{-2\pi i\xi}-1\right)\right)\cdot\widehat{f}=0.\qquad\left(\dagger\right) $$ For $\xi\in\mathbb{R}$ with $\gamma\left(\xi\right)=0$, we see (by only looking at the real part and using $a\neq0$) that $\cos\left(2\pi\xi\right)=1$ and hence $\xi\in\mathbb{Z}$. But this yields $\sin\left(2\pi\xi\right)=0$, so that (by looking at the imaginary part), we conclude $0=2\pi\xi+a\sin\left(2\pi\xi\right)=2\pi\xi$ and hence $\xi=0$. 
Note that this also holds in the case $a=0$. Hence, $\gamma\left(\xi\right)\neq0$ for all $\xi\in\mathbb{R}\setminus\left\{ 0\right\} $.
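One can corroborate numerically that $\gamma(\xi) = 2\pi i\xi - a\left(e^{-2\pi i\xi} - 1\right)$ vanishes only at $\xi = 0$; this is just a sampled sanity check of my own (the grid and the sample values of $a$, including negative ones, are arbitrary choices):

```python
import numpy as np

def gamma(xi, a):
    # gamma(xi) = 2*pi*i*xi - a*(exp(-2*pi*i*xi) - 1)
    return 2j * np.pi * xi - a * (np.exp(-2j * np.pi * xi) - 1.0)

xi = np.linspace(-10.0, 10.0, 100_001)
xi = xi[np.abs(xi) > 1e-6]  # exclude the known zero at xi = 0

for a in (-5.0, -0.3, 0.0, 0.7, 4.0):  # sample values, negative a included
    assert np.min(np.abs(gamma(xi, a))) > 1e-6
```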

This implies that the support of the tempered distribution $\widehat{f}$ is a subset of $\left\{ 0\right\} $, because for $g\in C_{c}^{\infty}\left(\mathbb{R}\right)$ with $0\notin{\rm supp}\left(g\right)$, we have $$ \frac{1}{\gamma}\cdot g\in C_{c}^{\infty}\left(\mathbb{R}\right), $$ because $\gamma$ is smooth and only vanishes at the origin, so that $$ \left\langle \widehat{f},g\right\rangle _{\mathcal{S}',\mathcal{S}}=\left\langle \gamma\cdot\widehat{f},\frac{1}{\gamma}\cdot g\right\rangle _{\mathcal{S}',\mathcal{S}}\overset{\left(\dagger\right)}{=}\left\langle 0,\frac{1}{\gamma}\cdot g\right\rangle _{\mathcal{S}',\mathcal{S}}=0. $$

But it is well-known (see for example this post Distributions supported on a single point which sadly is missing an answer at the moment) that the only distributions which are supported at the origin are (in dimension $n=1$) of the form $\sum_{0\leq\alpha\leq k}c_{\alpha}\cdot\partial^{\alpha}\delta_{0}$ for some $k\in\mathbb{N}$ and suitable coefficients $c_{\alpha}\in\mathbb{C}$, where $\delta_{0}$ is the Dirac Delta-distribution at $0$, whose derivative has again to be understood in the sense of tempered distributions, i.e. by $$ \left\langle \partial^{\alpha}\delta_{0},g\right\rangle _{\mathcal{S}',\mathcal{S}}=\left(-1\right)^{\alpha}\cdot\left(\partial^{\alpha}g\right)\left(0\right). $$ But it is well known (see for example About the k-th derivative of the Delta function) that the inverse Fourier transform of $\partial^{\alpha}\delta_{0}$ is a constant multiple of $\xi^{\alpha}$. Hence, $$ f=\mathcal{F}^{-1}\widehat{f}=\mathcal{F}^{-1}\left(\sum_{0\leq\alpha\leq k}c_{\alpha}\cdot\partial^{\alpha}\delta_{0}\right)=\sum_{0\leq\alpha\leq k}\left[c_{\alpha}\cdot\mathcal{F}^{-1}\left(\partial^{\alpha}\delta_{0}\right)\right] $$ is a polynomial.

But by assumption, $f$ is bounded (on the real line) and the only bounded polynomials are the constant functions. Hence, $f\equiv{\rm const}$.

It is worth noting that this proof is valid for all $a\in\mathbb{R}$, not only for $a>0$. Also, the same proof shows that $f$ is a polynomial even if we only assume that $f$ yields a tempered distribution (i.e. without necessarily assuming that $f$ is bounded).

BTW: It would be interesting to know the actual context in which this problem/exercise occurred or was posed, to see if there was some (intended) connection to Fourier analysis.

Once one has seen the idea, it is pretty natural, because the Fourier transform behaves so well with respect to differentiation and translation.


Below, we will see that the function $f$ is actually the restriction of an entire function to $\Bbb{R}$, i.e. the sum of a convergent power series with infinite radius of convergence. Once this is shown, the (sadly rather downvoted) answer of @Leucippus actually becomes a valid argument.

For this, let $K := \Vert f \Vert_\sup$. By assumption, $K < \infty$. We will show by induction on $n \in \Bbb{N}_0$ that $\Vert f^{(n)} \Vert_\sup \leq K \cdot |2a|^n$ for all $n \in \Bbb{N}_0$. For $n=0$ this is trivial.

As also noted in the other posts, continuity of $f$ implies that $x \mapsto \int_{x-1}^x f(t) dt$ is continuously differentiable, which implies that $f$ is continuously differentiable with $f'(x) = -a \cdot (f(x) - f(x-1))$.

By induction, for all $n \geq 1$, $$f^{(n)}(x) = -a \cdot [f^{(n-1)}(x) - f^{(n-1)}(x-1)].$$

Indeed, for $n=1$, this is what we just noted. In the induction step, we get

$$ f^{(n+1)}(x) = \frac{d}{dx}\left( (-a) \cdot [f^{(n-1)}(x) - f^{(n-1)}(x-1)] \right) = (-a) \cdot [f^{(n)}(x) - f^{(n)}(x-1)]. $$

By induction hypothesis, this yields

$$ |f^{(n+1)}(x)| \leq |a| \cdot [|f^{(n)}(x)| + |f^{(n)}(x-1)|] \leq 2|a| \cdot \Vert f^{(n)} \Vert_\sup \leq K \cdot |2a|^{n+1}. $$

But this implies

$$ \sum_{n=0}^\infty \left|\frac{f^{(n)}(0) \cdot x^n}{n!} \right| \leq \sum_{n=0}^\infty \frac{K \cdot (2|ax|)^n}{n!} = K \cdot \exp(2|ax|) < \infty, $$

so that the Taylor series $$g(x) := \sum_{n=0}^\infty \frac{f^{(n)}(0) \cdot x^n}{n!}$$ of $f$ around $0$ converges absolutely on all of $\Bbb{R}$.

It remains to show $f = g$. To this end, note that the Lagrange form of the remainder for Taylor's formula (see http://en.wikipedia.org/wiki/Taylor's_theorem#Explicit_formulae_for_the_remainder) yields

$$ |f(x) - g(x)| \xleftarrow[k \to \infty]{} \left| f(x) - \sum_{n=0}^k \frac{f^{(n)}(0) x^n}{n!} \right| = \left| \frac{f^{(k+1)}(\xi_L)}{(k+1)!} \cdot x^{k+1}\right| \leq \frac{K \cdot (2|xa|)^{k+1}}{(k+1)!} \xrightarrow[k\to\infty]{} 0, $$

because even $\sum_k \frac{K \cdot (2|xa|)^{k+1}}{(k+1)!} \leq K \cdot \exp(2|xa|) < \infty$. Hence, $f = g$ is the restriction of an entire function to $\Bbb{R}$.
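As a tiny numerical illustration that the remainder bound really collapses (the magnitudes $|x| = 3$, $|a| = 2$, $K = 5$ below are hypothetical choices of my own):

```python
import math

x_abs, a_abs, K = 3.0, 2.0, 5.0  # hypothetical values of |x|, |a|, K

# remainder bounds K * (2|xa|)^(k+1) / (k+1)! for k = 0, ..., 99
rem = [K * (2 * x_abs * a_abs) ** (k + 1) / math.factorial(k + 1) for k in range(100)]

# the bound tends to zero, even though its series sums to K*(exp(2|xa|) - 1)
assert rem[-1] < 1e-12
assert abs(sum(rem) - K * (math.exp(2 * x_abs * a_abs) - 1)) < 1e-6
```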


Starting with \begin{align} f(x) + a \int_{x-1}^{x} f(t) \, dt = c \end{align} then, by differentiation, \begin{align} f'(x) + a \left( f(x) - f(x-1) \right) = 0. \end{align} Now consider this equation in the form \begin{align} f'(x) = B = -a ( f(x) - f(x-1) ). \end{align} From the equation $f'(x) = B$ it is seen that $f(x) = Bx+c_{1}$. Now, \begin{align} f(x) - f(x-1) &= - \frac{B}{a} \\ (Bx+c_{1}) - (Bx - B + c_{1}) &= - \frac{B}{a} \\ B &= - \frac{B}{a}. \end{align} If this is to be satisfied then $B = 0$, which then implies that $$f(x) = c_{1}.$$
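The algebra above can be checked symbolically; this is a sketch of my own, assuming SymPy is available (the concrete value $a = 2$ in the final check is an arbitrary choice):

```python
import sympy as sp

x, a, B, c1 = sp.symbols('x a B c1')
f = B * x + c1  # the linear ansatz coming from f'(x) = B

# f'(x) + a*(f(x) - f(x-1)) reduces to B*(1 + a),
# which matches B = -B/a; so B = 0 whenever a != -1
eq = sp.diff(f, x) + a * (f - f.subs(x, x - 1))
assert sp.simplify(eq - B * (1 + a)) == 0

# for a concrete positive a, the only consistent value is B = 0
assert sp.solve(sp.Eq(eq.subs(a, 2), 0), B) == [0]
```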


The alternative is that given the equation \begin{align} f'(x) + a (f(x) - f(x-1) ) = 0 \end{align} then let $f(x) = a_{0} + a_{1} x + a_{2} x^{2} + \cdots$ to obtain \begin{align} 0 &= [ a_{1} + 2 a_{2} x + \cdots ] + a [ a_{0} + a_{1} x + a_{2} x^{2} + \cdots ] - a[ a_{0} + a_{1} (x-1) + a_{2} (x-1)^{2} + \cdots] \\ &= [ a_{1} + 2 a_{2} x + \cdots] + a[ a_{1} + a_{2}(2x-1) +\cdots] \\ &= [ (1+a) a_{1} - a( a_{2} - a_{3} + a_{4} - \cdots ) ] + [ 2(1+a) a_{2} - a( 3 a_{3} - 4 a_{4} + \cdots ) ] x + [ 3(1+a)a_{3} + \cdots ] x^{2} + \cdots . \end{align}
All the coefficients of $x^{n}$, $n \geq 0$, must vanish, which yields $a_{1} = a_{2} = \cdots = 0$. This then leaves $f(x) = a_{0}$, which is a constant.
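The coefficient bookkeeping can be corroborated with a truncated ansatz; this is a sketch of my own, assuming SymPy (the truncation degree $4$ and the sample value $a = 2$ are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x')
a_val = sp.Integer(2)            # sample positive value of a
coeffs = sp.symbols('a0:5')      # truncated ansatz a0 + a1*x + ... + a4*x^4
f = sum(c * x**k for k, c in enumerate(coeffs))

# f'(x) + a*(f(x) - f(x-1)) must vanish identically in x
eq = sp.expand(sp.diff(f, x) + a_val * (f - f.subs(x, x - 1)))
sol = sp.solve(sp.Poly(eq, x).all_coeffs(), coeffs[1:])

# all higher coefficients are forced to zero; a0 remains free
assert sol == {c: 0 for c in coeffs[1:]}
```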