Is the Global Error of the Taylor Series Second Order Method $O(h^2)$?


To solve an initial value problem $\frac{dx}{dt}=f(t,x(t))$ with $t\in[t_0,t_N]$, $x(t_0)=x_0$ we can use a Taylor series method of second order, with step size $h$:

$$x_{i+1}=x_i+hf(t_i,x_i)+\frac{h^2}{2}[f_t(t_i,x_i)+f(t_i,x_i)f_x(t_i,x_i)]$$

It's clear to me that the local truncation error of the method is $O(h^3)$ just by comparing this with the Taylor expansion of $x_{i+1}$.

However, I don't know how to derive the global error at $t=t_N$. It would make sense for it to be $O(h^2)$: since $N=\frac{t_N-t_0}{h}=O(1/h)$ and the local truncation error is $O(h^3)$, the global error should be $O(1/h)\times O(h^3)=O(h^2)$.

But obviously this isn't a rigorous proof. I guess we could show that the formula is stable and apply Dahlquist's equivalence theorem, but the second-order Taylor series method isn't a multistep formula, so I don't know whether that theorem applies.

Specifically if we look at the initial value problem $\frac{dx}{dt}=x(t)$ with $t\in[0,1]$ and $x(0)=1$, how could we show that the global error is $O(h^2)$?

Best answer:

This is true for all fixed-step one-step methods. The error from previous steps propagates at worst with the Lipschitz constant $L$ of $f$, so for the difference between numerical and exact solution, $e_n=|x_n-x(t_n)|$, one gets the recursion $$ e_{n+1}\le (1+hL)e_n+\tau_nh^{p+1}, $$ where $\tau_nh^{p+1}$ bounds the local truncation error of step $n$. This recursion leads to the bound $$ e_n\le \frac{e^{L\,nh}-1}{L}\max_{0\le k<n}\tau_k\,h^p. $$
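The bound follows by unrolling the recursion (a discrete Grönwall argument). With $e_0=0$ and $\tau=\max_{0\le k<n}\tau_k$,

$$
e_n \le \sum_{k=0}^{n-1}(1+hL)^k\,\tau h^{p+1}
= \frac{(1+hL)^n-1}{hL}\,\tau h^{p+1}
\le \frac{e^{L\,nh}-1}{L}\,\tau h^{p},
$$

using the geometric sum and $(1+hL)^n\le e^{nhL}$. One power of $h$ is consumed by the $O(1/h)$ number of steps, which is exactly the heuristic from the question made precise: local order $p+1$ gives global order $p$.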


For the exponential example you simply get $$ x_{n+1}=(1+h+\tfrac12h^2)x_n, $$ so that $$ x_n=(1+h+\tfrac12h^2)^n=e^{n\ln(1+h+\tfrac12h^2)}=e^{nh\,(1-\tfrac16h^2+O(h^3))}, $$ which again gives a relative error $O(h^2)$ at any fixed $t=nh$.
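One can check this order numerically: halving $h$ on $[0,1]$ should divide the error at $t=1$ by roughly $4$. A minimal self-contained sketch (the function name `taylor2_exp` is mine):

```python
import math

def taylor2_exp(h, n):
    # For x' = x the second-order Taylor step collapses to
    # x_{k+1} = (1 + h + h^2/2) x_k, with x_0 = 1.
    return (1.0 + h + 0.5 * h * h) ** n

# Errors at t = 1 for successively halved step sizes.
errors = []
for N in (10, 20, 40, 80):
    h = 1.0 / N
    errors.append(abs(taylor2_exp(h, N) - math.e))

# Consecutive ratios should approach 2^2 = 4 for a second-order method.
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
```

With the leading error term $e\cdot h^2/6$ from the expansion above, the ratios come out close to $4$, confirming second-order global convergence.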