Using Adams-Bashforth-Moulton Predictor-Corrector with Adaptive Step Size


I'm investigating the behaviour of predictor-corrector methods to numerically give approximations to the Initial Value Problem.

I have currently implemented a forward-backward Euler method like so,

$$\text{Predictor:}\quad u_{i+1}^*=u_i+h_if(u_i, t_i),\\ \text{Corrector:}\quad u_{i+1} = u_i + h_if(u_{i+1}^*,t_{i+1}),$$

with a variable step size $h_i$ which is calculated by the formula,

$$h_{i+1}=h_i \left \lvert \frac{\text{tolerance}}{T_{i+1}}\right\rvert ^{(p+1)^{-1}},$$

where $p$ is the order of the method (in this case $p=1$) and $T_{i+1}$ is half the difference between the predictor and the corrector (an approximation of the local truncation error).
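For concreteness, the scheme above can be sketched in Python as follows. This is a minimal sketch under my own naming; step-rejection logic and safety factors, which a production controller would include, are omitted:

```python
import math

def adaptive_euler_pc(f, t0, u0, t_end, h0=0.1, tol=1e-4):
    """Forward-Euler predictor / backward-Euler corrector with the
    step-size control h_{i+1} = h_i * |tol / T_{i+1}|^(1/(p+1)), p = 1.
    Sketch only: accepts every step, no rejection or safety factor."""
    t, u, h = t0, u0, h0
    ts, us = [t0], [u0]
    while t < t_end:
        h = min(h, t_end - t)               # do not step past t_end
        u_pred = u + h * f(u, t)            # predictor (forward Euler)
        u_corr = u + h * f(u_pred, t + h)   # corrector (backward Euler form)
        T = 0.5 * abs(u_corr - u_pred)      # LTE estimate: half the difference
        t += h
        u = u_corr
        ts.append(t)
        us.append(u)
        if T > 0:
            h = h * (tol / T) ** (1.0 / 2.0)  # p + 1 = 2
    return ts, us
```

For example, `adaptive_euler_pc(lambda u, t: -u, 0.0, 1.0, 1.0)` integrates $u'=-u$, $u(0)=1$ to $t=1$, where the exact solution is $e^{-1}$.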

My question arises when I want to use 2- and 3-step Adams-Bashforth-Moulton predictor-corrector combinations. I know how to calculate $p$, but I don't understand how to apply the iterative schemes correctly if I am changing the step size at each step. For example, take the 2-step Adams-Bashforth method as our predictor, with fixed step size,

$$\text{Predictor}:\quad u_{i+1}=u_i+\frac{h}{2}\left ( 3f(u_i, t_i) - f(u_{i-1}, t_{i-1}) \right ).$$

What happens if the step size is adapted between step $i$ and step $i+1$? For clarity, let $h_i=t_{i+1}-t_i$. Do I need to factor each step's length into our scheme (1)? Or do I need to recompute previous intermediate $u$ and derivative values using some kind of Runge-Kutta method (2)?

$$(1):\quad u_{i+1}=u_i+\frac{3h_i}{2}f(u_i,t_i) - \frac{h_{i-1}}{2}f(u_{i-1},t_{i-1})$$

Best Answer

Adams methods are interpolatory schemes derived by approximating the integrand of the integral equation,

$$u_{n+1}=u_{n}+\int_{t_n}^{t_{n+1}} f(t, u(t)) dt,$$

where $\frac{du(t)}{dt} = f(t,u(t))$ describes an initial value problem with a corresponding initial condition. $f$ is approximated by a polynomial (Lagrange, Newton, etc.) which interpolates the $k$ previously computed values.
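As a quick illustration of this derivation in the fixed-step case: integrating the Lagrange interpolant through $(t_{n-1}, f_{n-1})$ and $(t_n, f_n)$ over $[t_n, t_{n+1}]$ with constant step $h$ recovers the classical 2-step Adams-Bashforth increment. A symbolic sketch (assuming `sympy` is available):

```python
import sympy as sp

t, h = sp.symbols('t h', positive=True)
f_n, f_nm1 = sp.symbols('f_n f_nm1')

# Place the nodes at t_{n-1} = -h and t_n = 0, and integrate over [0, h].
t_nm1, t_n = -h, sp.Integer(0)
p = f_n * (t - t_nm1) / (t_n - t_nm1) + f_nm1 * (t - t_n) / (t_nm1 - t_n)

increment = sp.integrate(p, (t, t_n, h))
# The integral collapses to the familiar fixed-step AB2 increment h/2 * (3 f_n - f_{n-1}).
assert sp.simplify(increment - h * (3 * f_n - f_nm1) / 2) == 0
```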

Since the quoted $2$-step Adams-Bashforth formula assumes a fixed step size $h$ in its derivation, the method must be re-derived allowing (potentially) different step sizes. We use the points $t_{n-1}$ and $t_n$ to build the Lagrange interpolating polynomial $p(t)$,

$$f(t, u(t)) \approx p(t) = f(t_n, u_n)\frac{t-t_{n-1}}{t_{n}-t_{n-1}} + f(t_{n-1}, u_{n-1})\frac{t-t_{n}}{t_{n-1}-t_n}.$$

Using this and integrating yields the adaptive-step 2-step Adams-Bashforth scheme,

$$u_{n+1}=u_n+ \frac{h_{n}}{2 h_{n-1}} \left ( (2h_{n-1} + h_{n})f(t_n, u_n) - h_{n}f(t_{n-1}, u_{n-1}) \right ),$$

where $h_{n-1} = t_n - t_{n-1}$ and $h_{n} = t_{n+1}-t_n$. Setting $h_n=h_{n-1}=h$ recovers the fixed step-size form.