Is there a general formula for estimating the step size h in numerical differentiation formulas?

Using the three-point central-difference formula

$$ f^{\prime}(x_0)\approx \frac{f(x_0+h)-f(x_0-h)}{2h} $$

and for $f(x)=\exp(x)$ at $x_0=0$ we have

$$ \begin{array}{crr} h & f^{\prime}(0) & \text{error} \\ \hline 10^{-01} & 1.0017 & 1.6675\times 10^{-03} \\ 10^{-02} & 1 & 1.6667\times 10^{-05} \\ 10^{-03} & 1 & 1.6667\times 10^{-07} \\ 10^{-04} & 1 & 1.6669\times 10^{-09} \\ 10^{-05} & 1 & 1.2102\times 10^{-11} \\ 10^{-06} & 1 & -2.6755\times 10^{-11} \\ 10^{-07} & 1 & -5.2636\times 10^{-10} \\ 10^{-08} & 1 & -6.0775\times 10^{-09} \\ 10^{-09} & 1 & 2.7229\times 10^{-08} \\ 10^{-10} & 1 & 8.2740\times 10^{-08} \\ 10^{-11} & 1 & 8.2740\times 10^{-08} \\ 10^{-12} & 1 & 3.3389\times 10^{-05} \\ 10^{-13} & 9.9976\times 10^{-01} & -2.4417\times 10^{-04} \\ 10^{-14} & 9.9920\times 10^{-01} & -7.9928\times 10^{-04} \\ 10^{-15} & 1.0547 & 5.4712\times 10^{-02} \\ 10^{-16} & 5.5511\times 10^{-01} & -4.4489\times 10^{-01} \\ \end{array} $$

From $10^{-1}$ down to $10^{-5}$ the results behave as expected, since the three-point central-difference formula converges at rate $O(h^2)$. As you can see, however, round-off error makes the error deteriorate rapidly as $h$ decreases further. My question is: is there a general formula for estimating the step size $h$ in numerical differentiation formulas that gives the best result?
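For reference, the table above can be reproduced with a short script (a sketch in Python; the helper name `central_diff` is mine):

```python
# Reproduce the table: three-point central difference for f(x) = exp(x)
# at x0 = 0, where the exact derivative is exp(0) = 1.
import math

def central_diff(f, x0, h):
    """Three-point central-difference approximation of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

for k in range(1, 17):
    h = 10.0 ** -k
    err = central_diff(math.exp, 0.0, h) - 1.0
    print(f"h = 1e-{k:02d}   error = {err:+.4e}")
```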


Well, there is a method to estimate the optimal $h$ when roundoff error is present. I'll show it for the central-difference formula at hand, but you can repeat the same process for any similar method. Assume that $|f^{(i)}| < M_i$ for all $i \in \Bbb N$. The function evaluations we have contain a roundoff error; denote it $\delta(x)$ and assume $|\delta(x)| < \epsilon$. Denote the measured function value at $x_0$ by $\bar{f}(x_0) = f(x_0) + \delta(x_0)$ and the actual value by $f(x_0)$.

Then, we estimate the error

$$\operatorname{error}(h) = \left|f'(x_0) - \frac{\bar{f}(x_0+h) - \bar{f}(x_0-h)}{2h}\right| = \left|f'(x_0) - \frac{f(x_0+h) - f(x_0-h)}{2h} - \frac{\delta(x_0+h) - \delta(x_0-h)}{2h}\right| \le \left|f'(x_0) - \frac{f(x_0+h) - f(x_0-h)}{2h}\right| + \left|\frac{\delta(x_0+h) - \delta(x_0-h)}{2h}\right| \le \frac{h^2M_3}{6} + \frac{2\epsilon}{2h} = \frac{h^2M_3}{6} + \frac{\epsilon}{h} =: g(h) $$

Now, differentiate $g$ with respect to $h$ and find the $h$ that minimizes it:

$$ g'(h) = \frac{hM_3}{3} - \frac{\epsilon}{h^2} = 0 \quad\Longrightarrow\quad h_{opt} = \sqrt[3]{\frac{3\epsilon}{M_3}}. $$

In this case we get $h_{opt} = \sqrt[3]{3\epsilon/M_3} \approx 7\times 10^{-6}$, assuming $\epsilon = 10^{-16}$ and $M_3 \approx 1$ (reasonable for $f = \exp$ near $0$), which agrees with the table: the error is smallest around $h = 10^{-5}$.
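A quick numerical check of this estimate (a sketch in Python; the values $\epsilon = 10^{-16}$ and $M_3 = 1$ are the assumptions above, and the truncation constant $h^2 M_3/6$ is the standard central-difference bound):

```python
# Compare the predicted optimal step with the h that actually minimizes
# the central-difference error for f(x) = exp(x) at x0 = 0.
import math

def central_diff(f, x0, h):
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# Error model: g(h) = h**2 * M3 / 6 (truncation) + eps / h (roundoff).
# eps = 1e-16 and M3 = 1 are assumptions (f = exp near 0).
eps, M3 = 1e-16, 1.0
h_opt = (3 * eps / M3) ** (1.0 / 3.0)
print(h_opt)                        # ~6.7e-6

# Sanity check: among h = 10**-1 ... 10**-16, the smallest actual error
# occurs near h_opt (typically at h = 1e-5 with IEEE doubles).
errors = {10.0 ** -k: abs(central_diff(math.exp, 0.0, 10.0 ** -k) - 1.0)
          for k in range(1, 17)}
best_h = min(errors, key=errors.get)
print(best_h)
```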


Rule of thumb: take the relative step $\frac{h}{x_0}$ to be the square root of the machine ulp: $$h = x_0\sqrt{\mathrm{ulp}}.$$ The rationale is that the truncation and roundoff errors are then of the same order.

When $x_0=0$, no rule :(
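A minimal sketch of this rule (assuming IEEE doubles, with `sys.float_info.epsilon` playing the role of the ulp; the helper name is mine):

```python
# Rule of thumb: relative step = sqrt(machine ulp), i.e. h = x0 * sqrt(ulp).
import math
import sys

def rule_of_thumb_step(x0):
    """Step size h = |x0| * sqrt(ulp); degenerates to 0 at x0 = 0."""
    ulp = sys.float_info.epsilon        # ~2.22e-16 for IEEE doubles
    return abs(x0) * math.sqrt(ulp)

print(rule_of_thumb_step(1.0))          # ~1.49e-8
print(rule_of_thumb_step(0.0))          # 0.0 -- the x0 = 0 case has no rule
```

Note that the square-root balance corresponds to a first-order (forward) difference, whose truncation error is $O(h)$; for a second-order formula like the central difference above, the same balancing argument gives a cube-root step instead.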