# The Two-Step Adams-Bashforth Method with Different Stepsizes

The Adams-Bashforth family of numerical methods has a well-known derivation but I couldn’t find a source which gave the two-step method in the case that the two stepsizes are different. Why do I want that? The two-step method requires two initial points. The second point is often calculated using an Euler step, but as the Euler method is $\mathcal{O}(h)$ I want to make this first step small to avoid introducing a large global error. The third point is then calculated with the Adams-Bashforth method with different stepsizes. From then on the Adams-Bashforth method can be used as usual. Another use might be in an adaptive stepsize method, where we want to adjust the stepsizes as we go.

## An Initial Condition ODE Problem

Say we have an ordinary differential equation $y' = f(t, y(t))$ with an initial condition $y(t_0) = y_0$ and we want to solve it numerically. If we know $y(t)$ at a time $t_n$ and want to know what $y$ is at a later time $t_{n+1}$, the fundamental theorem of calculus tells us that we find it by integrating $y'$ over the time interval,

$$ y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} f(t, y(t)) \, \mathrm{d}t . $$

The idea behind any ODE solver is to compute the right-hand-side integral for
some numerical approximation of $f$. The problem is then computed over a series
of steps $n = 1, 2, \dots N$ to give a sequence of points $y_n$ which
approximate $y(t)$ to some order of accuracy as a function of the stepsize. The
method is *consistent* if the local error (i.e. the error from step $n$ to step
$n+1$) goes to zero faster than the stepsize $(t_{n+1} - t_n)$ goes to zero.

## Polynomial Interpolation

Where the Euler method takes the slope $f$ to be a constant on the interval
$[t_n, t_{n+1}]$, the idea behind *Adams-Bashforth* methods is to approximate
$f$ by a Lagrange interpolating polynomial:

$$ f(t, y(t)) \approx P(t) = \sum_{j=1}^{m} f(t_j) \, \ell_j(t), $$

where

$$ \ell_j(t) = \prod_{\substack{k = 1 \\ k \neq j}}^{m} \frac{t - t_k}{t_j - t_k} . $$

Here $P(t)$ is the polynomial of degree $\le (m-1)$ that passes through the $m$ points $(t_1, f(t_1)), (t_2, f(t_2)), \dots, (t_m, f(t_m))$. We’ll take the linear $(m = 2)$ interpolant through the point $t_{n}$ and an earlier point $t_{n-1}$, so we have

$$ P(t) = f(t_{n-1}) \, \frac{t - t_n}{t_{n-1} - t_n} + f(t_n) \, \frac{t - t_{n-1}}{t_n - t_{n-1}} . $$

Now if we put this approximating polynomial into the integral above, we find

$$ y(t_{n+1}) \approx y(t_n) + \int_{t_n}^{t_{n+1}} \left[ f(t_{n-1}) \, \frac{t - t_n}{t_{n-1} - t_n} + f(t_n) \, \frac{t - t_{n-1}}{t_n - t_{n-1}} \right] \mathrm{d}t . $$

## Step Sizes

If we let $h_1 := t_n - t_{n-1}$ and $h_2 := t_{n+1} - t_n$ then

$$ \int_{t_n}^{t_{n+1}} P(t) \, \mathrm{d}t = h_2 \, f(t_n, y_n) + \frac{h_2^2}{2 h_1} \bigl( f(t_n, y_n) - f(t_{n-1}, y_{n-1}) \bigr) . $$

Putting this back into the approximation, we get

$$ y(t_{n+1}) \approx y(t_n) + h_2 \left[ \left( 1 + \frac{h_2}{2 h_1} \right) f(t_n, y(t_n)) - \frac{h_2}{2 h_1} f(t_{n-1}, y(t_{n-1})) \right] , $$

and our sequence of approximation points $y_n$ is calculated as

$$ y_{n+1} = y_n + h_2 \left[ \left( 1 + \frac{h_2}{2 h_1} \right) f_n - \frac{h_2}{2 h_1} f_{n-1} \right] , \qquad f_n := f(t_n, y_n), $$

for $n = 1, 2, \dots N$. If the steps are of *equal size*, i.e. $h := h_1 = h_2$,
we find

$$ y_{n+1} = y_n + \frac{h}{2} \bigl( 3 f_n - f_{n-1} \bigr) , $$

which is the standard two-step Adams-Bashforth method.

## Accuracy

Replacing $f(t,y(t))$ with the interpolant $P(t)$ incurs a global error of order $\mathcal{O}(h^m)$, so in the case of the two-step method we have $\mathcal{O}(h^2)$.

Note that if you follow the same derivation with $m = 1$ you get the Euler method, so the Euler method is in fact also the one-step Adams-Bashforth method.

## Python Implementation of the Method

We’ll now define a function to implement the two-step Adams-Bashforth method for a system of first-order ODEs. Below we’ll try it out on a test equation.
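As a sketch of such a solver (the function name, signature, and the use of plain Python lists for the system components are choices made here, not necessarily the original code): the first step is an Euler step of size `h0`, the second uses the different-stepsizes formula derived above with $h_1 = h_0$, and the rest use the standard equal-stepsize formula.

```python
import math

def adams_bashforth2(f, t0, y0, t_end, n_steps, h0=None):
    """Two-step Adams-Bashforth for a system y' = f(t, y).

    f(t, y) takes and returns a list of components; y0 is a list of
    initial values. The first step is an Euler step of size h0
    (defaulting to the size of the other steps); the second uses the
    Adams-Bashforth formula with different stepsizes (h1 = h0, h2 = h);
    the remaining steps use the standard equal-stepsize formula.
    Returns the lists of times and solution points.
    """
    if h0 is None:
        h0 = (t_end - t0) / n_steps         # default: all steps equal
    h = (t_end - t0 - h0) / (n_steps - 1)   # size of the remaining steps

    ts, ys = [t0], [list(y0)]

    # Step 1: Euler, stepsize h0.
    f_prev = f(t0, ys[0])
    t = t0 + h0
    y = [yi + h0 * fi for yi, fi in zip(ys[0], f_prev)]
    ts.append(t); ys.append(y)

    # Step 2: Adams-Bashforth with stepsizes h1 = h0, h2 = h.
    f_curr = f(t, y)
    r = h / (2.0 * h0)
    y = [yi + h * ((1.0 + r) * fc - r * fp)
         for yi, fc, fp in zip(y, f_curr, f_prev)]
    t += h
    ts.append(t); ys.append(y)
    f_prev, f_curr = f_curr, f(t, y)

    # Steps 3..n_steps: standard two-step Adams-Bashforth.
    for _ in range(n_steps - 2):
        y = [yi + 0.5 * h * (3.0 * fc - fp)
             for yi, fc, fp in zip(y, f_curr, f_prev)]
        t += h
        ts.append(t); ys.append(y)
        f_prev, f_curr = f_curr, f(t, y)

    return ts, ys
```

For example, `adams_bashforth2(lambda t, y: [y[0]], 0.0, [1.0], 5.0, 20)` approximates $y = \mathrm{e}^t$ on $[0, 5]$ in twenty equal steps.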

## A Test Problem: The Exponential

To test our solver, let’s take a simple ODE: $y' = ay$ with initial value $y(0) = 1$ and $a \in \mathbb{C}$. We know the analytic solution of this equation is $y = \mathrm{e}^{at}$ so we can check the accuracy of the method against this.
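A sketch of the test system (the names here are illustrative; for simplicity this version takes $a$ real, and a complex $a$ would use `cmath` instead of `math`):

```python
import math

def exp_rhs(t, y, a=1.0):
    """Right-hand side of y' = a*y, written for a system (a list of
    components). For complex a, use cmath in place of math."""
    return [a * yi for yi in y]

def exp_exact(t, a=1.0):
    """Analytic solution y(t) = exp(a*t) of the initial value problem
    y(0) = 1."""
    return math.exp(a * t)
```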

Now we’ve defined our solver and a test method, we can check that the method works for some example parameters. Here we’ll set $a = 1$ and solve over $t \in [0, 5]$.

Next we’ll solve the system for different numbers of steps: $N = 5, 10, 20$ (corresponding to $h = 1, \tfrac{1}{2}, \tfrac{1}{4}$). Then we’ll plot the results alongside the known analytic result $y = \mathrm{e}^{t}$.

For now, note that we’re leaving the first Euler step the same size as all the subsequent steps.

From the plot above the method implementation looks good: the numerical solution follows the known analytic solution (black dashed line), and converges on it as we increase the number of steps. We can check that the order of the approximation is indeed $\mathcal{O}(h^2)$ by plotting the global error at $t=5$, given by $| \, y_N - \mathrm{e}^5 \, |$, over a large range of stepsizes.
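One way to run this check (a self-contained sketch: the solver here is a compact scalar version with an equal-size Euler first step, and `order_check` and the particular stepsizes are choices made for illustration):

```python
import math

def ab2(f, t0, y0, t_end, n):
    """Two-step Adams-Bashforth for a scalar ODE y' = f(t, y), with an
    Euler first step of the same size h as all the others."""
    h = (t_end - t0) / n
    t = t0
    f_prev = f(t, y0)
    y = y0 + h * f_prev                 # Euler first step
    t += h
    for _ in range(n - 1):              # standard AB2 steps
        f_curr = f(t, y)
        y = y + 0.5 * h * (3.0 * f_curr - f_prev)
        f_prev = f_curr
        t += h
    return y

order_check = 2      # exponent of the reference curve h**order_check

# Global error |y_N - e^5| for y' = y over [0, 5] at a range of stepsizes.
errors = []
for n in (50, 100, 200, 400, 800):
    h = 5.0 / n
    err = abs(ab2(lambda t, y: y, 0.0, 1.0, 5.0, n) - math.exp(5.0))
    errors.append((h, err))

# Slope between successive points on a log-log plot; for an O(h^2)
# method this should sit near order_check = 2.
slopes = [math.log(e_a / e_b) / math.log(h_a / h_b)
          for (h_a, e_a), (h_b, e_b) in zip(errors, errors[1:])]
```

Plotting `errors` on a log-log axis against the reference curve `h**order_check` gives the comparison described below.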

Plotting the global error (blue line) against stepsize on a logarithmic scale,
we see that the slope is constant below $h = 0.1$, which tells us that the order
is constant. We could find that order formally by fitting this logarithm, but
it’s good enough for now to compare the global error with the function $h^2$
(black dashed line). That the slopes are the same indicates visually that the
method is $\mathcal{O}(h^2)$. (Try comparing with $h^1$ or $h^3$ by changing
`order_check` in the code above to $1$ or $3$.)

## Changing the First Step Size

Finally, remember that the reason we derived the two-step Adams-Bashforth method with different stepsizes was so we could make the first Euler $\mathcal{O}(h^1)$ step smaller. This step will otherwise introduce a large error which will carry through the subsequent Adams-Bashforth steps. How small should we make it?

We’ll use the test ODE system `exp` as above, but this time keep the main
stepsize fixed at $h = 10^{-2}$. Then we’ll compare taking the first Euler
stepsize $h_0$ from $10^{-2}$ down to $10^{-6}$.

We’ll plot the global error on a semilogarithmic axis.
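A self-contained sketch of this experiment (the compact scalar solver, names, and step count here are illustrative choices, and the plotting itself is left to your favourite library; note that shrinking $h_0$ while keeping $h$ fixed shifts the end time slightly, so we compare against $\mathrm{e}^{t_N}$ at the actual final time):

```python
import math

def ab2_small_first_step(f, t0, y0, h, n_main, h0):
    """Euler step of size h0, one Adams-Bashforth step with different
    stepsizes (h1 = h0, h2 = h), then n_main - 1 equal-stepsize steps.
    Returns the final time and the solution there."""
    f_prev = f(t0, y0)
    y = y0 + h0 * f_prev                # Euler first step, size h0
    t = t0 + h0
    f_curr = f(t, y)
    r = h / (2.0 * h0)
    y = y + h * ((1.0 + r) * f_curr - r * f_prev)   # variable-stepsize AB2
    f_prev = f_curr
    t += h
    for _ in range(n_main - 1):         # standard equal-stepsize AB2
        f_curr = f(t, y)
        y = y + 0.5 * h * (3.0 * f_curr - f_prev)
        f_prev = f_curr
        t += h
    return t, y

h, n_main = 1e-2, 500                   # main stepsize fixed at 10**-2
errors = []
for h0 in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
    t_end, y_end = ab2_small_first_step(lambda t, y: y, 0.0, 1.0, h, n_main, h0)
    errors.append((h0, abs(y_end - math.exp(t_end))))
```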

As we’d expect, the global error decreases as the contribution from the first error is brought down by reducing the first stepsize. But if we zoom in on the y-axis, we see that we can only bring it down so far before the error from the remaining steps begins to dominate.

So we see that reducing the size of the first step down from $h_0 = h = 10^{-2}$
reduces the global error down to around $h_0 = 10^{-4}$. In general, if we
**make the Euler step an order of magnitude smaller** we should bring the local
error from this step in line with that of the first Adams-Bashforth steps.
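A back-of-the-envelope way to see this (a rough argument, ignoring how errors are amplified over the remaining integration): the single Euler step contributes a local error of roughly $\tfrac{1}{2} h_0^2 \, |y''|$, while each Adams-Bashforth step contributes roughly $\tfrac{5}{12} h^3 \, |y'''|$. Equating the two,

$$ \tfrac{1}{2} h_0^2 \sim \tfrac{5}{12} h^3 \quad \Longrightarrow \quad h_0 \sim h^{3/2} , $$

so for $h = 10^{-2}$ we’d want $h_0 \approx 10^{-3}$: one order of magnitude smaller, consistent with the rule of thumb above.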

Comments or corrections are welcome by email or tweet.