Looking for finite difference approximations past the fourth derivative

Solution 1:

For even-order derivatives the finite-difference approximation has a simple form.

Consider the following finite-difference operator $\Delta$:
$$ \Delta f(x) = \frac{f(x+h/2) - f(x-h/2)}{h}. $$
It is a second-order approximation of the first derivative. Applying it twice gives
$$ \Delta^2 f(x) = \Delta \Delta f(x) = \Delta \left(\frac{f(x+h/2) - f(x-h/2)}{h}\right) = \frac{f(x+h) - f(x)}{h^2} - \frac{f(x) - f(x-h)}{h^2} = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} \approx f''(x). $$
Note that $\Delta$ can be written using the shift operator $T_a = \exp\left(a \frac{d}{dx}\right)$:
$$ \Delta = \frac{T_{h/2} - T_{-h/2}}{h} = \frac{e^{\frac{h}{2}\frac{d}{dx}} - e^{-\frac{h}{2}\frac{d}{dx}}}{h}. $$
Powers of $\Delta$ can therefore be expanded with the binomial theorem (note that $T_a$ and $T_b$ commute):
$$ \Delta^{2k} = \frac{1}{h^{2k}}\sum_{m=0}^{2k} \binom{2k}{m} T_{h/2}^{2k-m} (-1)^m T_{-h/2}^{m} = \frac{1}{h^{2k}}\sum_{m=0}^{2k} \binom{2k}{m} (-1)^m T_{(2k-m)h/2} T_{-mh/2} = \\ = \frac{1}{h^{2k}}\sum_{m=0}^{2k} \binom{2k}{m} (-1)^m T_{(k-m)h} = \frac{1}{h^{2k}} \sum_{m=-k}^k \binom{2k}{k+m} (-1)^{m+k} T_{mh}. $$
Thus
$$ \Delta^{2k} f(x) = \frac{1}{h^{2k}} \sum_{m=-k}^{k} \binom{2k}{k+m} (-1)^{m+k} f(x + mh). $$
This is a second-order formula, since it amounts to applying the second-order operator $\Delta$ to $f(x)$ $2k$ times. More precisely, the truncation error is
$$ \Delta^{2k} f(x) - f^{(2k)}(x) = \frac{kh^2}{12} f^{(2k+2)}(x) + O(h^4). $$
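As an illustration, here is a minimal Python sketch that builds the $\Delta^{2k}$ stencil from the formula above and applies it (the helper names `central_weights` and `derivative_2k` are mine, just for this example):

```python
# Sketch: build and apply the Delta^{2k} stencil derived above.
from math import comb, sin

def central_weights(k):
    """Integer weights (-1)^(m+k) * binom(2k, k+m) for m = -k, ..., k."""
    return [(-1) ** (m + k) * comb(2 * k, k + m) for m in range(-k, k + 1)]

def derivative_2k(f, x, k, h):
    """Second-order central approximation of f^(2k)(x)."""
    w = central_weights(k)
    s = sum(wm * f(x + m * h) for m, wm in zip(range(-k, k + 1), w))
    return s / h ** (2 * k)

# k = 2 reproduces the classic stencil [1, -4, 6, -4, 1] for the 4th derivative:
print(central_weights(2))
# The 4th derivative of sin(x) is sin(x) itself:
print(derivative_2k(sin, 1.0, k=2, h=5e-3))  # ~ sin(1) = 0.8414709848...
```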

But from a practical point of view, formulas for high-order derivatives (beyond, say, the fourth) are almost never used. Suppose $f(x)$ is evaluated on a machine with a limited number of binary digits; assume double precision, which has $53$ bits of mantissa. Then every computed value of $f(x)$ carries a small error, $f(x) \pm \varepsilon |f(x)|$ with $\varepsilon = 2^{-53} \approx 10^{-16}$. Applying $\Delta^{2k}$ to this perturbed function yields
$$ \Delta^{2k} (f(x) + \delta f(x)) = \Delta^{2k} f(x) + \Delta^{2k} \delta f(x) = f^{(2k)}(x) + \epsilon_{\text{trunc}} + \Delta^{2k} \delta f(x). $$
For $\delta f$ it is known that $|\delta f(x)| \leq \varepsilon |f(x)|$, but its sign can be arbitrary. Thus the best bound we can write for $\Delta^{2k} \delta f(x)$ is
$$ \left|\Delta^{2k} \delta f(x)\right| \leq \frac{1}{h^{2k}} \sum_{m=-k}^{k} \binom{2k}{k+m} |\delta f(x + mh)| \leq \frac{\varepsilon \max |f(x)|}{h^{2k}} \sum_{m=-k}^{k} \binom{2k}{k+m} = 2^{2k}\frac{\varepsilon \max |f(x)|}{h^{2k}}. $$
This bound is tight: it is not hard to construct a $\delta f(x)$ that attains it. Since $\epsilon_{\text{trunc}} \leq \frac{kh^2}{12}\max |f^{(2k+2)}(x)|$, the total error consists of two terms:
$$ \epsilon \leq \frac{kh^2}{12} M_{2k+2} + 2^{2k}\frac{\varepsilon M_0}{h^{2k}}, $$
where for brevity $M_j = \max |f^{(j)}(x)|$. From this estimate one can find the optimal step $h_\star$ and the minimal achievable error $\epsilon_\star$:
$$ 0 = \frac{d\epsilon}{dh} = \frac{kh}{6}M_{2k+2} - 2^{2k+1}k \frac{\varepsilon M_0}{h^{2k+1}},\\
h_\star^{2k+2} = \frac{3 \cdot 2^{2k+2} \varepsilon M_0}{M_{2k+2}}, \qquad h_\star = 2 \sqrt[2k+2]{\frac{3\varepsilon M_0}{M_{2k+2}}},\\
\epsilon_\star = \frac{kh_\star^2}{12} M_{2k+2} \left( 1 + 2^{2k+2}\frac{\varepsilon M_0}{h_\star^{2k+2}}\frac{3}{k M_{2k+2}} \right) = \frac{kh_\star^2}{12} M_{2k+2} \cdot \frac{k+1}{k} = \frac{(k+1)h_\star^2}{12} M_{2k+2}. $$
Here is a table for different $k$ with $M_0 = M_{2k+2} = 1$ (e.g. $f(x) = \sin x$) and $\varepsilon = 2^{-53}$:
$$ \begin{array}{ccc} k & h_\star & \epsilon_\star\\ \hline 1 & 2.70186\times 10^{-4} & 1.21667\times 10^{-8} \\ 2 & 5.26565\times 10^{-3} & 6.93176\times 10^{-6} \\ 3 & 2.32459\times 10^{-2} & 1.80124\times 10^{-4} \\ 4 & 5.66609\times 10^{-2} & 1.33769\times 10^{-3} \\ 5 & 1.02622\times 10^{-1} & 5.26565\times 10^{-3} \\ 6 & 1.56854\times 10^{-1} & 1.43519\times 10^{-2} \\ 7 & 2.1562\times 10^{-1} & 3.09945\times 10^{-2} \\ 8 & 2.76166\times 10^{-1} & 5.72009\times 10^{-2} \\ 9 & 3.36633\times 10^{-1} & 9.44348\times 10^{-2} \\ 10 & 3.9583\times 10^{-1} & 1.43625\times 10^{-1} \\ \end{array} $$
Here are also results of numerical experiments with $k=2$ (fourth derivative) and $k=6$ (twelfth derivative):

*[two plots of the numerical experiments for $k=2$ and $k=6$; images not reproduced here]*
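The table is straightforward to reproduce from the two closed-form expressions; a minimal sketch of my own, assuming $M_0 = M_{2k+2} = 1$:

```python
# Reproduce the h_star / eps_star table for M0 = M_{2k+2} = 1.
eps = 2.0 ** -53                                  # double-precision unit roundoff
for k in range(1, 11):
    h_star = 2 * (3 * eps) ** (1 / (2 * k + 2))   # h* = 2 * (3 eps)^(1/(2k+2))
    e_star = (k + 1) * h_star ** 2 / 12           # eps* = (k+1) h*^2 / 12
    print(f"{k:2d}  {h_star:.5e}  {e_star:.5e}")
```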

Note that the twelfth-derivative formula hardly gave two significant digits of the derivative, while the fourth-derivative one gave six. The observed optimal values agree with the theoretical ones quite well. To get at least 5 correct digits of the 12th derivative you would need double-double (128-bit) floating-point arithmetic.
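As a sanity check of that claim, here is a sketch of the same $k=6$ stencil evaluated in extended precision with `mpmath`; the choice of library and of `mp.dps = 32` (roughly double-double precision) are my own assumptions:

```python
# Sketch: the k = 6 stencil at ~32 significant digits via mpmath.
from math import comb
from mpmath import mp, mpf, sin

mp.dps = 32                  # roughly the precision of double-double
k = 6
h = mpf("0.01")              # near the optimal step for eps ~ 1e-32
x = mpf(1)
d = sum((-1) ** (m + k) * comb(2 * k, k + m) * sin(x + m * h)
        for m in range(-k, k + 1)) / h ** (2 * k)
print(d)   # 12th derivative of sin is sin; should match sin(1) = 0.84147... to ~5 digits
```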

Note that if you need such a finite difference for some ODE, for example
$$ y^{(12)}(x) + a y^{(6)}(x) + b y'(x) + c y(x) = f(x), $$
you can always rewrite it as a first-order system with 12 unknowns $y_j = y^{(j)}$:
$$ y_{11}'(x) + a y_6(x) + b y_1(x) + c y_0(x) = f(x)\\ y_{10}'(x) = y_{11}(x)\\ \vdots\\ y_{0}'(x) = y_1(x). $$
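For example, with SciPy the rewrite looks like this (the coefficients, right-hand side, and initial conditions below are placeholders, not from the original problem):

```python
# Sketch: the 12th-order ODE above as a first-order system, y[j] = y^(j)(x).
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 1.0, 1.0, 1.0          # placeholder coefficients
f = np.sin                       # placeholder right-hand side f(x)

def rhs(x, y):
    dy = np.empty(12)
    dy[:-1] = y[1:]              # y_j' = y_{j+1} for j = 0..10
    dy[-1] = f(x) - a * y[6] - b * y[1] - c * y[0]
    return dy

sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(12), rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])              # y(1) for zero initial data
```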

Solution 2:

Here's what I've found on the topic:

Notes on the derivation of Finite Differences kernels, on regularly spaced grids, using arbitrary sample points

Some tables of finite differences can be found here on Wikipedia.

There is a paper that discusses the generation of the coefficients used:

Generation of Finite Difference Formulas on Arbitrarily Spaced Grids

The interesting thing about this paper is that the grid can be arbitrarily spaced. Note that the paper presents an algorithm for computing the coefficients; see the sketch below.
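For reference, here is a sketch of that algorithm (the recursion from Fornberg's paper, translated to Python by me; treat my transcription as illustrative rather than authoritative):

```python
# Sketch of Fornberg's recursion: weights c[j, k] such that
# f^(k)(z) ~ sum_j c[j, k] * f(x[j]) on an arbitrary grid x.
import numpy as np

def fornberg_weights(z, x, m):
    """z: evaluation point; x: grid nodes; m: highest derivative order."""
    n = len(x) - 1
    c = np.zeros((n + 1, m + 1))
    c[0, 0] = 1.0
    c1, c4 = 1.0, x[0] - z
    for i in range(1, n + 1):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                for k in range(mn, 0, -1):
                    c[i, k] = c1 * (k * c[i - 1, k - 1] - c5 * c[i - 1, k]) / c2
                c[i, 0] = -c1 * c5 * c[i - 1, 0] / c2
            for k in range(mn, 0, -1):
                c[j, k] = (c4 * c[j, k] - k * c[j, k - 1]) / c3
            c[j, 0] = c4 * c[j, 0] / c3
        c1 = c2
    return c

# On the uniform grid {-1, 0, 1} this recovers [1, -2, 1] for f'':
print(fornberg_weights(0.0, np.array([-1.0, 0.0, 1.0]), 2)[:, 2])
```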

I've found this

$$\delta^n_h[f](x)=\sum_{i=0}^{n} (-1)^i \binom{n}{i} f\!\left(x+\left(\frac{n}{2} - i\right)h\right)$$

on Wikipedia, although I'm still searching for a derivation.
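A quick numeric check of the formula (my own sketch); dividing $\delta^n_h[f](x)$ by $h^n$ does reproduce known derivatives, e.g. $(\sin x)''' = -\cos x$:

```python
# Check the central-difference formula above for n = 3.
from math import comb, cos, sin

def delta_n(f, x, n, h):
    return sum((-1) ** i * comb(n, i) * f(x + (n / 2 - i) * h)
               for i in range(n + 1))

h = 1e-2
print(delta_n(sin, 1.0, 3, h) / h ** 3)  # ~ -cos(1)
print(-cos(1.0))                         # -0.5403023058...
```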