Does fourth-order Runge-Kutta have higher accuracy than the second-order method?

I'm writing a presentation on modelling fluid flow. We used a second-order Runge-Kutta method as the numerical scheme to describe the flow.

I just want to verify that fourth-order Runge-Kutta would give a higher degree of accuracy - I can't find this stated anywhere online.

I'm 99% sure, but seeing as it counts towards my final mark, I figured I'd ask the intellects of MSE to verify this.


Solution 1:

RK4 in $100$ steps of step size $h=0.01$ will integrate over the time interval $T=1$ with an error of magnitude $10^{-8}$, using $400$ evaluations of the system function (four per step).

A second-order method with the same number of steps will give an error of magnitude $10^{-4}$. With the same budget of $400$ function evaluations (two per step, so $200$ steps) you get $h=0.005$ and an error of about $2.5\cdot10^{-5}$. To get an error of about $10^{-8}$ you would need a step size of $h=10^{-4}$, that is, $20\,000$ function evaluations.
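
The arithmetic behind these counts can be checked in a few lines. This is only a rough sketch: it assumes error $\approx h^p$ with a unit error constant and simply counts the stage evaluations per step, four for RK4 and two for a second-order method; the helper name cost_and_error is made up for illustration.

# Rough cost/accuracy model: error ~ h^p (unit constant), cost = stages per step * number of steps.
def cost_and_error(order, stages, h, T=1.0):
    steps = int(round(T/h))
    return stages*steps, h**order    # (function evaluations, error estimate)

print(cost_and_error(4, 4, 0.01))    # RK4, h=0.01:  400 evaluations, error ~ 1e-08
print(cost_and_error(2, 2, 0.01))    # order 2, h=0.01:  200 evaluations, error ~ 1e-04
print(cost_and_error(2, 2, 0.005))   # order 2, same 400 evaluations:  error ~ 2.5e-05
print(cost_and_error(2, 2, 1e-4))    # order 2 at error ~ 1e-08:  20000 evaluations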


Note that there are lower limits for the step size. They depend on the integration time interval and the scale of the system function, but roughly, for computations using the double type and an order-$p$ method, the step size should never be smaller than $10^{-15/(p+1)}$, for RK4 thus $10^{-3}$.
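
Roughly, this floor comes from balancing the rounding error accumulated over a unit time interval, of size about $10^{-15}/h$, against the truncation error $h^p$:
$$\frac{10^{-15}}{h}\approx h^p \quad\Longleftrightarrow\quad h\approx 10^{-15/(p+1)},$$
which gives $h\approx 10^{-3}$ for $p=4$.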


To argue about the general error behavior one does not need the details of the method, only its error order. However, any cookbook on numerical methods has RK4; it is not that complicated.

You only need to know that the error of an order-$p$ method is proportional to $M_pTh^p$, where $M_p$ is the maximum of the coefficient of $h^p$ in the local discretization error, i.e., in the difference between $(y_{k+1}-y_k)/h$ and the same difference quotient for an exact solution.

That is, $M_p$ stands for the scale of the system function and $T$ for the length of the integration interval. The more precise estimate is $\frac{M_p}{L}\,(e^{LT}-1)\,h^p$, with $L$ the Lipschitz constant of the system.
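
For small $LT$ the two estimates agree to leading order, since
$$\frac{M_p}{L}\left(e^{LT}-1\right)h^p=M_p\left(T+\frac{LT^2}{2}+\cdots\right)h^p\approx M_p\,T\,h^p.$$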


Practical example

Striving for an error level of $10^{-4}$ over the integration interval $T=1$ demands (rounded) $10$ steps for RK4. The second-order Heun method replaces each of these $10$ steps with another $10$ steps to reach the same error level, which replaces $40$ function evaluations with $200$. The first-order Euler method would need $10\,000$ steps and thus function evaluations, that is, $1\,000$ per RK4 step.

The following table shows that this drastic difference also plays out in actual computations; in addition, the step counts of the methods other than Euler are tuned so that they also reach the error level $10^{-7}$. The last column computes the coefficient $C$ in the formula $\text{error} = C\cdot h^p$, with $p$ the order of the method; that it stays roughly constant confirms that $p$ is indeed the order.

Euler:  h=2.50e-02, N=   40;  y[N]=9.15192033, y[N]-y(1)=-1.8008e-01, N^1*(y[N]-y(1))=-7.2032
Euler:  h=5.00e-03, N=  200;  y[N]=9.29508303, y[N]-y(1)=-3.6918e-02, N^1*(y[N]-y(1))=-7.3836
Euler:  h=1.00e-04, N=10000;  y[N]=9.33125831, y[N]-y(1)=-7.4293e-04, N^1*(y[N]-y(1))=-7.4293
----
 Heun:  h=1.00e-02, N=  100;  y[N]=9.33143081, y[N]-y(1)=-5.7042e-04, N^2*(y[N]-y(1))=-5.7042
 Heun:  h=2.00e-03, N=  500;  y[N]=9.33197820, y[N]-y(1)=-2.3033e-05, N^2*(y[N]-y(1))=-5.7582
 Heun:  h=2.00e-04, N= 5000;  y[N]=9.33200100, y[N]-y(1)=-2.3081e-07, N^2*(y[N]-y(1))=-5.7703
----
  RK2:  h=1.00e-02, N=  100;  y[N]=9.33168345, y[N]-y(1)=-3.1778e-04, N^2*(y[N]-y(1))=-3.1778
  RK2:  h=2.00e-03, N=  500;  y[N]=9.33198843, y[N]-y(1)=-1.2806e-05, N^2*(y[N]-y(1))=-3.2016
  RK2:  h=2.00e-04, N= 5000;  y[N]=9.33200111, y[N]-y(1)=-1.2828e-07, N^2*(y[N]-y(1))=-3.2070
----
  RK3:  h=5.00e-02, N=   20;  y[N]=9.33173585, y[N]-y(1)=-2.6538e-04, N^3*(y[N]-y(1))=-2.1231
  RK3:  h=4.00e-03, N=  250;  y[N]=9.33200109, y[N]-y(1)=-1.4362e-07, N^3*(y[N]-y(1))=-2.2441
  RK3:  h=2.00e-04, N= 5000;  y[N]=9.33200123, y[N]-y(1)=-1.8000e-11, N^3*(y[N]-y(1))=-2.2500
----
  RK4:  h=1.25e-01, N=    8;  y[N]=9.33182269, y[N]-y(1)=-1.7854e-04, N^4*(y[N]-y(1))=-0.7313
  RK4:  h=2.00e-02, N=   50;  y[N]=9.33200110, y[N]-y(1)=-1.3406e-07, N^4*(y[N]-y(1))=-0.8379
  RK4:  h=1.00e-03, N= 1000;  y[N]=9.33200123, y[N]-y(1)=-8.3134e-13, N^4*(y[N]-y(1))=-0.8313
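
The coefficient in the last column can also be used predictively: once $C$ is known from a coarse run, $C/N^p$ estimates the error of a finer one. A small sketch in the same spirit, with $C$ read off (and rounded) from the coarse rows above; the helper name predicted_error is made up:

# Predict the error after N steps from error ~ C/N^p, with C taken from a coarse run.
def predicted_error(C, p, N):
    return C / N**p

print(predicted_error(-0.84, 4, 1000))   # RK4:  ~ -8.4e-13, table shows -8.3134e-13
print(predicted_error(-5.76, 2, 5000))   # Heun: ~ -2.3e-07, table shows -2.3081e-07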

These values are for the scalar differential equation given by

from math import sin, cos, exp

def yprime(t,y):
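    # right-hand side f(t,y) of the ODE  y' = y*(2 - sin(t))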
    return y*(2-sin(t));

def y_exact(t):
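    # exact solution y(t) = 2*exp(2*t + cos(t) - 1) of the ODE above, used to measure the error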
    return 2*exp(2*t+cos(t)-1)

def y_init(t):
    return y_exact(t);


t0 = 0;
y0 = y_exact(t0);
y1 = y_exact(t0+1);

The steps of the methods used are

def Euler_step(f,t,y,h):
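    # explicit (forward) Euler step: one stage, order 1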
    return y+h*f(t,y);

def Heun_step(f,t,y,h):
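    # Heun step (explicit trapezoidal rule): two stages, order 2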
    k1=f(t  , y     );
    k2=f(t+h, y+h*k1);
    return y+0.5*h*(k1+k2);

def RK2_step(f,t,y,h):
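    # explicit midpoint step: two stages, order 2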
    k1=f(t      , y         );
    k2=f(t+0.5*h, y+0.5*h*k1);
    return y+h*k2;

def RK3_step(f,t,y,h):
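    # Kutta's third-order step: three stages, order 3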
    k1=f(t      , y         );
    k2=f(t+0.5*h, y+0.5*h*k1);
    k3=f(t+    h, y+    h*(2*k2-k1));
    return y + h*(k1+4*k2+k3)/6;

def RK4_step(f,t,y,h):
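    # classical Runge-Kutta step: four stages, order 4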
    k1=f(t      , y         );
    k2=f(t+0.5*h, y+0.5*h*k1);
    k3=f(t+0.5*h, y+0.5*h*k2);
    k4=f(t+    h, y+    h*k3);
    return y + h*(k1+2*k2+2*k3+k4)/6;

The values-and-errors tables are then built with the code

methods = { "Euler":Euler_step, "Heun": Heun_step, "RK2": RK2_step, "RK3":RK3_step, "RK4":RK4_step };

def test_method(name, order, subdivisions):
    stepper = methods[name]
    for N in subdivisions:
        h = 1.0/N;
        y=y0;
        for k in range(N):
            y = stepper(yprime, t0+k*h, y, h)

        print "%5s:  h=%.2e, N=%5u;  y[N]=%.8f, y[N]-y(1)=%.4e, N^%d*(y[N]-y(1))=%.4f" % (name, h, N, y, y-y1, order, N*(y-y1));
    print "----"    

test_method("Euler", 1, [  40, 200, 10000]);
test_method( "Heun", 2, [ 100, 500,  5000]);
test_method(  "RK2", 2, [ 100, 500,  5000]);
test_method(  "RK3", 3, [  20, 250,  5000]);
test_method(  "RK4", 4, [   8,  50,  1000]);