Why settle for Lagrange Interpolation when doing linear multistep ODE integration?


Say that we have some initial value problem:

$y'(t) = f(t, y(t)), \qquad y(0) = y_0,$

with $y_0$ and $f(t,y(t))$ known.

If we use Euler's method to numerically approximate the first $k$ steps, then we have the approximations $y_0, y_1, \dots, y_k$ known.
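As a concrete illustration of this startup phase (the test problem $y' = y$, $y(0) = 1$ is my own choice, not from the question):

```python
def euler_startup(f, t0, y0, h, k):
    """Forward Euler: return the nodes t_0..t_k and approximations y_0..y_k."""
    ts, ys = [t0], [y0]
    for _ in range(k):
        ys.append(ys[-1] + h * f(ts[-1], ys[-1]))  # y_{n+1} = y_n + h f(t_n, y_n)
        ts.append(ts[-1] + h)
    return ts, ys

# y' = y, y(0) = 1 has exact solution e^t; three Euler steps of size 0.1.
ts, ys = euler_startup(lambda t, y: y, 0.0, 1.0, 0.1, 3)
```

Each element `ys[i]` then serves as the estimate of $y(t_i)$ that the multistep method builds on.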

The other day, my professor derived the Adams-Bashforth method to get an estimate for $y_{k+1}$:

$\int_{t_k}^{t_{k+1}}y'(t)dt = \int_{t_k}^{t_{k+1}} f(t,y(t))dt$

$y(t_{k+1}) - y(t_k) = \int_{t_k}^{t_{k+1}} f(t,y(t))dt$

$y(t_{k+1}) = y(t_k) + \int_{t_k}^{t_{k+1}} f(t,y(t))dt$

Since we have $y_k$ as an estimate for $y(t_k)$, all that remains is to estimate the integral. My professor suggests replacing the integrand with the Lagrange polynomial that interpolates it through the $k+1$ points $(t_j, f(t_j, y_j))$, $j = 0, \dots, k$, and then integrating that polynomial over $[t_k, t_{k+1}]$, which yields the Adams-Bashforth method. This makes sense.
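As a concrete instance: interpolating the integrand linearly through only the two most recent values $f_{k-1}$ and $f_k$ and integrating that line over $[t_k, t_{k+1}]$ gives the two-step Adams-Bashforth formula $y_{k+1} = y_k + \tfrac{h}{2}(3f_k - f_{k-1})$. A minimal sketch (the test problem $y' = y$ and the single Euler startup step are my own choices):

```python
import math

def ab2_step(f, ts, ys, h):
    """One step of two-step Adams-Bashforth: integrating the line through
    (t_{k-1}, f_{k-1}) and (t_k, f_k) over [t_k, t_{k+1}] gives
    y_{k+1} = y_k + h/2 * (3 f_k - f_{k-1})."""
    fk = f(ts[-1], ys[-1])
    fkm1 = f(ts[-2], ys[-2])
    return ys[-1] + 0.5 * h * (3.0 * fk - fkm1)

# March y' = y, y(0) = 1 from t = 0 to t = 1 with h = 0.1:
# one Euler step to get y_1, then nine AB2 steps.
f = lambda t, y: y
h = 0.1
ts, ys = [0.0, h], [1.0, 1.0 + h * f(0.0, 1.0)]
for _ in range(9):
    ys.append(ab2_step(f, ts, ys, h))
    ts.append(ts[-1] + h)
# ys[-1] approximates e = 2.71828...
```

Note that the only data the method consumes are the values $f_j = f(t_j, y_j)$, i.e., the derivative values themselves.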

My question is: since we also know the derivative values $y'(t_j) = f(t_j, y_j)$ at each node, why don't we use Hermite interpolation of $y$ (or otherwise put that slope information to work somehow)? Wouldn't that be more accurate than Lagrange interpolation? My professor didn't know, and after a little (well, admittedly, very little) digging, I haven't found a name for such a method.

If that method exists, I would love to hear its name. If not, I would love to hear why it isn't used (perhaps it's unstable, too expensive, or inconsistent?).