Divided differences and differentiability
Let $f: \mathbb{R} \rightarrow \mathbb{R}$ and let $y_0,y_1,y_2 \in \mathbb{R}$ be distinct. We define the divided differences: $$[y_0;f]=f(y_0),$$ $$[y_0,y_1;f]=\frac{f(y_1)-f(y_0)}{y_1-y_0},$$ $$[y_0,y_1,y_2;f]=\frac{[y_1,y_2;f]-[y_0,y_1;f]}{y_2-y_0}.$$
Assume that for each $x \in \mathbb{R}$ and $\varepsilon>0$ there exists a $\delta>0$ such that $$|[y_0,y_1,y_2;f]-[y_0',y_1',y_2';f]|< \varepsilon$$ for arbitrary sets $\{y_0,y_1,y_2\}$, $\{y_0',y_1',y_2'\}$ of distinct points such that $|y_i-x|<\delta$, $|y_i'-x|<\delta$ ($i=0,1,2$).
Is $f$ necessarily of class $C^2$?
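Before the solutions, a quick numerical sketch of what the condition measures (my own illustration, not part of the question; the helper `dd2` is ad hoc): for a smooth $f$, all second-order divided differences near $x$ cluster around $f''(x)/2$, so the assumption is a uniform Cauchy property for that limit.

```python
import math

def dd2(f, y0, y1, y2):
    # [y0, y1, y2; f] via the recursive definition above
    d01 = (f(y1) - f(y0)) / (y1 - y0)
    d12 = (f(y2) - f(y1)) / (y2 - y1)
    return (d12 - d01) / (y2 - y0)

f, x = math.sin, 1.0
for h in (1e-1, 1e-2, 1e-3):
    # two different triples of distinct points within distance h of x
    a = dd2(f, x - h, x + 0.3 * h, x + h)
    b = dd2(f, x - 0.7 * h, x - 0.1 * h, x + 0.9 * h)
    print(h, a, b, -math.sin(x) / 2)  # both columns approach f''(x)/2
```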
Solution 1:
A. First order case. By definition of the first derivative, we have $$ [x,z;f]\to f'(x), \qquad\textrm{as}\quad z\to x, $$ at each $x$ (in this first order case, the existence of $f'(x)$ follows from the assumption below via the Cauchy criterion). In the following, let us fix $x$. By assumption, for any $\varepsilon>0$, there exists $\delta>0$ such that $$ |[x,z;f]-[y,t;f]|<\varepsilon, \qquad\textrm{for all}\quad z,y,t\in B_\delta(x), $$ where $B_\delta(x)=\{t:|t-x|<\delta\}$. Now we fix $\varepsilon>0$, and will show that with $\delta>0$ given by the assumption as above, it holds for each $y\in B_\delta(x)$ that $$ |f'(x)-f'(y)|<3\varepsilon. $$ To prove the claim, first pick $z\in B_\delta(x)$ satisfying $$ |[x,z;f]-f'(x)|<\varepsilon. $$ Then pick $t\in B_\delta(x)$ satisfying $$ |[y,t;f]-f'(y)|<\varepsilon. $$ Combining the above by the triangle inequality, $$ |f'(x)-f'(y)|\le|f'(x)-[x,z;f]|+|[x,z;f]-[y,t;f]|+|[y,t;f]-f'(y)|<3\varepsilon, $$ which proves the claim; hence $f'$ is continuous.
B. Second order case. One can prove as in Einar's answer that the divided differences converge to some function $g$ pointwise. Then the argument from Part A shows that $g$ is continuous. So the only remaining question is the existence of $f''$ and whether or not $f''=2g$.
Let $G$ be a function satisfying $G''=2g$ (such a $G$ exists, e.g. a second antiderivative of $2g$, since $g$ is continuous by Part A). Then it is well known that the second order divided differences of $G$ converge pointwise to $g$. Hence by linearity, the same divided differences of $u=f-G$ converge pointwise to $0$. If we could show that $u''$ exists everywhere and $u''\equiv0$, this would imply that $f''$ exists and $f''=2g$ everywhere. Therefore from now on, we can assume $g\equiv0$. We have $$ [x,y,z;f]=\frac{[x,y;f]-[y,z;f]}{x-z}=\frac{[y,x;f]-[y,z;f]}{x-z}\to0, $$ as $\max\{|x-y|,|z-y|\}\to0$, for each $y$. This means $$ [y,x;f]-[y,z;f]=o(\delta), $$ with the $o(\delta)$-term being uniform in $x,z\in B_\delta(y)$. In particular, $[y,x;f]$ is Cauchy, hence convergent, as $x\to y$, and so $f'(y)=[y,y;f]$ exists everywhere. Now the idea is to consider $$ [x,x_1,y;f]+[x,y_1,y;f]=\frac{[x,x_1;f]-[y,x_1;f]+[x,y_1;f]-[y,y_1;f]}{x-y}, $$ which goes to $0$ as $x_1,y_1,y\to x$, and somehow take the limits $x_1\to x$ and $y_1\to y$ before taking the other limits.
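As a concrete instance of this well-known fact (an illustration I'm adding, not needed for the proof): for $G(x)=x^3$ a direct computation gives $$[x,y,z;G]=x+y+z\;\longrightarrow\;3x=\tfrac{1}{2}G''(x)$$ as $y,z\to x$, consistent with the normalisation $G''=2g$.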
C. Proof that $f''$ exists and $f''\equiv0$. Let $x$ be a point and let $\varepsilon>0$. Then pick $\delta>0$ such that $|[x,z,y;f]|<\varepsilon$ for any $y,z\in B_\delta(x)$. We will show that $$ \left|\frac{f'(x)-f'(y)}{x-y}\right|<5\varepsilon, \qquad\textrm{for any}\quad y\in B_\delta(x)\setminus\{x\}. $$ To this end, choose $x_1,y_1\in B_\delta(x)$ such that $$ |f'(x)-[x,x_1;f]|<\varepsilon|x-y|, \qquad |f'(y)-[y_1,y;f]|<\varepsilon|x-y|, $$ and that $$ |[x,y_1;f]-[x_1,y;f]|<\varepsilon|x-y|. $$ This is possible because $[x,x_1;f]\to f'(x)$ as $x_1\to x$, $[y,y_1;f]\to f'(y)$ as $y_1\to y$, and $[z,t;f]$ is a continuous function of $z$ and $t$. Note the higher order smallness in the right hand sides, which formalizes "taking the limits $x_1\to x$ and $y_1\to y$ first". Finally, we use the simple decompositions $$ f'(x) = \left(f'(x)-[x,x_1;f]\right) +\left([x,x_1;f]-[x_1,y;f]\right) +[x_1,y;f], $$ and $$ -f'(y) = -[x,y_1;f] +\left([x,y_1;f]-[y_1,y;f]\right) +\left([y_1,y;f]-f'(y)\right), $$ to conclude that $$ \left|\frac{f'(x)-f'(y)}{x-y}\right| < \varepsilon+|[x,x_1,y;f]|+\varepsilon+|[x,y_1,y;f]|+\varepsilon <5\varepsilon. $$
Solution 2:
I'll try to provide a proof of the general case $[y_0,\ldots,y_n;f]$. This proof has many similarities to that of timur, but I'll formulate things more in terms of limits rather than using $\epsilon$s and $\delta$s. My other answer was wrong, but I leave it since it had some useful bits, one of which I'll redo here.
Hope I didn't overlook anything grave this time. However, I'm sure I've taken a few technical short-cuts which could need filling in: the lemma provided at the end is an attempt at that.
Notation: Let $R^{n+1}_k\subset\mathbb{R}^{n+1}$ consist of the points $y=(y_0,\ldots,y_n)$ for which at most $k$ coordinates have the same value: i.e. $R^{n+1}_1$ consists of those where all $y_i$ are distinct, while $R^{n+1}_{n+1}=\mathbb{R}^{n+1}$. We then have $[\cdot;f]:R^{n+1}_1\rightarrow\mathbb{R}$ defined as $$ [y_0,\ldots,y_n;f]=\frac{[y_1,\ldots,y_n;f]-[y_0,\ldots,y_{n-1};f]}{y_n-y_0} \tag{1} $$ for $y\in R^{n+1}_{1}$, generalising the cases $n=0,1,2$ stated in the original problem.
Definition: We say that a function $f$ is $\mathcal{F}^n$ if, for each $x\in\mathbb{R}$, $$\lim_{y,y'\rightarrow x^{[n+1]}}\Big|[y;f]-[y';f]\Big|=0\tag{2}$$ where the limit is taken over $y,y'\in R^{n+1}_1$ and, for $x\in\mathbb{R}$, the point $x^{[n+1]}=(x,\ldots,x)\in\mathbb{R}^{n+1}$. This is just the $\epsilon$-$\delta$ formulation of the problem restated in terms of limits.
(i) Assume $f$ is $\mathcal{F}^n$. If we take any sequence $y^{(k)}\in R^{n+1}_1$ which converges to $x^{[n+1]}$, then $[y^{(k)};f]$ is a Cauchy sequence: i.e. $\big|[y^{(p)};f]-[y^{(q)};f]\big|\rightarrow0$ as $p,q\rightarrow\infty$. Hence, $[y^{(k)};f]$ must converge, and by (2) the limit does not depend on the chosen sequence. Let's denote the limit $[x^{[n+1]};f]=g_n(x)$.
(ii) Let's prove by induction that $[y_0,\ldots,y_n;f]$ is symmetric in the $y_i$. Obviously, it holds for $n=0,1$. Assume it holds for $n<N$. Setting $n=N\ge2$, let $y=(u,v,A,w)\in R^{n+1}_1$ where $A=(a_1,\ldots,a_{n-2})$ (possibly empty). We then have $$ \begin{split} [u,v,A,w;f]=&\frac{[u,v,A;f]-[v,A,w;f]}{u-w}\\ =&\frac{1}{u-w}\Big([u,A,v;f]-[v,A,w;f]\Big)\\ =&\frac{1}{u-w}\left(\frac{[u,A;f]-[A,v;f]}{u-v}-\frac{[v,A;f]-[A,w;f]}{v-w}\right)\\ =&\frac{[u,A;f]}{(u-v)(u-w)}+\frac{[v,A;f]}{(v-u)(v-w)}+\frac{[w,A;f]}{(w-u)(w-v)} \end{split} $$ which is symmetric in $u,v,w$. From the first step, we also see that it must be symmetric in permutations of $v,A$. These symmetries generate the whole permutation group, so symmetry also holds for $n=N$.
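This symmetry is easy to check numerically (an illustration I'm adding; `divdiff` implements the recursion (1) directly and is not part of the answer):

```python
import itertools, math

def divdiff(f, ys):
    # [y0, ..., yn; f] via the recursion (1), for distinct nodes
    if len(ys) == 1:
        return f(ys[0])
    return (divdiff(f, ys[1:]) - divdiff(f, ys[:-1])) / (ys[-1] - ys[0])

f = math.exp
ys = (0.1, 0.4, 0.2, 0.7, 0.3)
vals = [divdiff(f, p) for p in itertools.permutations(ys)]
print(max(vals) - min(vals))  # ~1e-15: all 120 orderings agree
```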
(iii) Again, we use induction, assuming the following holds for $n<N$: a function $f$ is $\mathcal{F}^n$ if and only if it is $C^n$ (i.e. $\mathcal{F}^n=C^n$); in particular, it is then also $\mathcal{F}^k$ for all $k<n$. The definition of $[y;f]$ for $y\in R^{n+1}_1$ may be extended continuously to all $y\in\mathbb{R}^{n+1}$. Moreover, $g_n(x)=f^{(n)}(x)/n!$ where $f^{(n)}$ is the $n$th derivative.
For $n=0,1$, the assumption holds true. So we let $n=N\ge2$.
In order to utilise the induction hypothesis, we first need to show that when $f$ is $\mathcal{F}^n$, it is also $\mathcal{F}^{n-1}$. To do this, we rewrite equation (1) as $$(u-v)[u,A,v;f]=[u,A;f]-[A,v;f]$$ for $A=(a_1,\ldots,a_{n-1})$. For $u=(u_1,\ldots,u_n)$ and $v=(v_1,\ldots,v_n)$, both in $R^n_1$ (with all $2n$ entries distinct), we then get the telescoping identity $$ \begin{split} (u_1-v_1)[u_1,\ldots,u_n,v_1;f]&+(u_2-v_2)[u_2,\ldots,u_n,v_1,v_2;f]\\ &+\cdots+(u_n-v_n)[u_n,v_1,\ldots,v_n;f]\\ =&[u_1,\ldots,u_n;f]-[v_1,\ldots,v_n;f]\\ \end{split}\tag{3} $$ where letting $u,v\rightarrow x^{[n]}$ makes every term on the left tend to $0$ (the $(n+1)$-point divided differences are bounded near $x^{[n+1]}$ by the $\mathcal{F}^n$ property, while $u_i-v_i\rightarrow0$), so $\big|[u;f]-[v;f]\big|\rightarrow0$: this is precisely the definition of $f$ being $\mathcal{F}^{n-1}$.
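Identity (3) can likewise be sanity-checked numerically for a small $n$ (again my own sketch, with the ad hoc helper `divdiff` as above):

```python
import math

def divdiff(f, ys):
    # [y0, ..., yn; f] via the recursion (1)
    if len(ys) == 1:
        return f(ys[0])
    return (divdiff(f, ys[1:]) - divdiff(f, ys[:-1])) / (ys[-1] - ys[0])

f = math.sin
u = (0.10, 0.20, 0.30)  # u_1, ..., u_n with n = 3
v = (0.15, 0.25, 0.35)  # v_1, ..., v_n; all 2n values distinct
lhs = sum((u[i] - v[i]) * divdiff(f, u[i:] + v[:i + 1]) for i in range(3))
rhs = divdiff(f, u) - divdiff(f, v)
print(lhs, rhs)  # equal up to rounding
```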
For any $y\in R^{n+1}_n$, i.e. not all $y_i$ are equal, we can rearrange the $y_i$ to make $[y;f]=[u,A,v;f]$ where $u\not=v$ and use (1) to get $[y;f]$ expressed in terms of $[u,A;f]$ and $[A,v;f]$, both of which are defined on the whole $\mathbb{R}^n$. This provides us with an extension of $[y;f]$ to all $y\in R^{n+1}_n$.
Finally, in equation (3), we let $u=x^{[n]}$, $v=x'^{[n]}$: by continuity, we can do that, or alternatively we can take the limits $u\rightarrow x^{[n]}$ and $v\rightarrow x'^{[n]}$ first (before the later limit $x'\rightarrow x$). Dividing by $x-x'$ on both sides then gives $$ \frac{g_{n-1}(x)-g_{n-1}(x')}{x-x'} =[x,\ldots,x,x';f]+\cdots+[x,x',\ldots,x';f], $$ where letting $x'\rightarrow x$ gives $g_{n-1}'(x)=n\,g_n(x)$. Hence, since $g_{n-1}=f^{(n-1)}/(n-1)!$ by the induction hypothesis, $g_n(x)=f^{(n)}(x)/n!$.
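For $n=2$ this is exactly the relation appearing in Solution 1: $g_1=f'$ and $g_1'(x)=2g_2(x)$, i.e. $f''=2g_2$ with $g_2$ the pointwise limit of the second-order divided differences.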
The only remaining thing to point out is that the final extension of $[y;f]$ from $y\in R^{n+1}_n$ to all of $\mathbb{R}^{n+1}$ is continuous. This basically follows from the definition, but we have to use the following lemma (which I have implicitly used further up as well) to ensure the limit in (2) holds true even if the sequence lies in $R^{n+1}_n$ rather than in $R^{n+1}_1$.
Lemma: Given $x,v_i,u^{(i)}_j\in\mathbb{R}$ such that $v_i\rightarrow x$ as $i\rightarrow\infty$ and $u^{(i)}_{j}\rightarrow v_i$ as $j\rightarrow\infty$ for each $i$, we can pick indices $j_i$ so that $u^{(i)}_{j_i}\rightarrow x$ as $i\rightarrow\infty$.
One use of the lemma is that if $y^{(i)}\in R^{n+1}_n$ and $z^{(i,j)}\in R^{n+1}_1$ with $z^{(i,j)}\rightarrow y^{(i)}$ as $j\rightarrow\infty$, we can set $v_i=[y^{(i)};f]$, $u^{(i)}_j=[z^{(i,j)};f]$, and replace the limit of $[y^{(i)};f]$, which is taken in $R^{n+1}_n$, with a limit taken in $R^{n+1}_1$.
Solution 3:
NB: The main argument in this answer, namely that the stated condition does not imply $C^2$, is wrong.
I'll leave the answer since it contains some useful arguments about the existence of a limit ($[y_0,\ldots,y_n;f]\rightarrow g_n(x)$ as $y\rightarrow x$) as well as some examples highlighting the difference between having a derivative and that derivative being continuous.
As has been commented already, the expression is essentially the definition of the second derivative; however, there are some technical problems (which I realised from the comment by timur).
For any $n$, having $$\big|[y_0,\ldots,y_n;f]-[y'_0,\ldots,y'_n;f]\big|<\epsilon$$ for all $y_i, y'_i$ sufficiently close to $x$ guarantees that there is a limit $g_n(x)$ with $$\big|[y_0,\ldots,y_n;f]-g_n(x)\big|\le\epsilon.$$ E.g. pick $\epsilon_k\rightarrow0$ with corresponding $\delta_k$, and a sequence of $(n+1)$-tuples $y^{(k)}=(y^{(k)}_0,\ldots,y^{(k)}_n)$ with $|y^{(k)}_i-x|<\delta_k$; by plugging in $y=y^{(k)}$ and $y'=y^{(l)}$ for $l>k$, we find that $[y^{(k)};f]$ is a Cauchy sequence, so we know it converges.
I'm fairly confident the limiting function will be $g_n(x)=f^{(n)}(x)/n!$, where $f^{(n)}$ is the $n$th derivative, provided the $n$th derivative exists. (I thought I had an example where the limit exists but $f$ does not have an $n$th derivative, which I claimed in a previous version of this answer, but this was not correct.)
For $f$ to be $C^n$, it must not only have an $n$th derivative; that derivative must also be continuous. This need not be the case, and it fails already at $n=1$ with $f(x)=x^2\sin(1/x)$ (setting $f(0)=\lim_{x\rightarrow 0}f(x)=0$). This has $f'(0)=0$ due to the $x^2$ factor; accordingly, $[0,y;f]=y\sin(1/y)\rightarrow 0$ as $y\rightarrow 0$. However, for $x\neq0$, $f'(x)=2x\sin(1/x)-\cos(1/x)$, which is not continuous at $x=0$ as $\cos(1/x)$ fluctuates between $1$ and $-1$. (Note that $[y_0,y_1;f]$ with both points free does not tend to $0$: pairs straddling the points $1/(2\pi k)$ give values near $-1$, so the first-order version of the assumed condition fails for this $f$, as it must.)
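A quick numerical sketch of that last remark (my own addition; the helpers `f` and `dd` are ad hoc):

```python
import math

def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def dd(y0, y1):
    return (f(y1) - f(y0)) / (y1 - y0)

for k in (10, 100, 1000):
    xk = 1.0 / (2 * math.pi * k)  # f'(xk) = -cos(2*pi*k) = -1
    h = 1e-5 * xk
    print(dd(0.0, xk), dd(xk - h, xk + h))
# first column -> 0 = f'(0), second stays near -1: the first-order
# analogue of the assumed condition fails at x = 0.
```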
For $n=2$, I previously used $f(x)=x^3\sin(1/x)$ as a counterexample: this is $C^1$, but does not have a second derivative at $x=0$. However, my claim that $[y_0,y_1,y_2;f]\rightarrow 0$ as $y_i\rightarrow 0$ does not seem to be correct.
Instead, for $n=2$, we can set $f(x)=x^4\sin(1/x)$. This has a second derivative defined everywhere, but it is not continuous: for $x\neq0$, $f''(x)=12x^2\sin(1/x)-6x\cos(1/x)-\sin(1/x)$, which has no limit as $x\rightarrow0$, while $f''(0)=0$.
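Consistently with the main result being true, this $f$ must violate the assumed condition near $0$, and one can see this numerically: symmetric second-order divided differences $[x-h,x,x+h;f]\approx f''(x)/2$ pick up the oscillating $-\sin(1/x)/2$ term (a minimal sketch of my own, with ad hoc helper names):

```python
import math

def f(x):
    return x ** 4 * math.sin(1.0 / x) if x != 0 else 0.0

def dd2(x, h):
    # symmetric second-order divided difference [x-h, x, x+h; f]
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (2.0 * h * h)

for k in (5, 50, 500):
    a = 2.0 / ((4 * k + 1) * math.pi)  # sin(1/a) = 1, so f''(a) ~ -1
    b = 1.0 / (k * math.pi)            # sin(1/b) = 0, so f''(b) ~ 0
    print(dd2(a, 1e-2 * a * a), dd2(b, 1e-2 * b * b))
# near x = 0 the values cluster both near -1/2 and near 0, so no
# single limit exists and the assumed condition fails for this f.
```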
My main doubt now is that it seems to me $|[y_0,\ldots,y_n;f]-g_n(x)|<\epsilon$ not only ensures that $f$ has an $n$th derivative at $x$, but also bounds the $n$th derivative in a neighbourhood of $x$ (by the same $\epsilon$-$\delta$ bound). Hence, the claim in the original problem statement should be true.