Showing Runge-Kutta implicit method local truncation error


Consider the implicit Runge-Kutta method: \begin{equation*} y_{n+1} = y_n + hf\left(t_n + \frac{2}{3}h, \frac{1}{3}(y_n + 2y_{n+1})\right) \end{equation*}

a) Show that the local truncation error is $O(h^2)$.

b) Show that the method is A-stable.

So far this is what I have done:

I know we should use the local truncation error formula

\begin{equation*} \tau(t,h)=\frac{y(t+h)-y(t)}{h}- \phi(t,y(t),h) \end{equation*}

After using Taylor series expansion, I arrive at

\begin{equation*} \phi (t, y(t); h) = f(t, \frac{y}{3}) + \frac{2}{3}hf_x(t, \frac{y}{3}) + \frac{2}{3}(y(t)+hy'(\eta_t))f_y(t, \frac{y}{3}) + O(h^2) \end{equation*}

Then, again by Taylor series expansion

\begin{equation*} \tau(t,h)=y'(t)+\frac{h}{2}y''(\xi_t) - (f(t, \frac{y}{3}) + \frac{2}{3}hf_x(t, \frac{y}{3}) + \frac{2}{3}(y(t)+hy'(\eta_t))f_y(t, \frac{y}{3}) + O(h^2)) \end{equation*}

Am I going in the right direction? I don't know what else I can do from here, and I would also like to know if I should use one more term in Taylor's expansions.
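Not a proof, but a quick numerical sanity check for part a): if "local truncation error" means the one-step error $y(t_n+h)-y_{n+1}$ computed from exact starting data, it should shrink like $h^2$. The sketch below (my own, using the test problem $y'=-y$, for which the implicit stage relation is linear and can be solved in closed form) measures exactly that:

```python
import math

def one_step(y, h):
    # One step of y_{n+1} = y_n + h*f((y_n + 2*y_{n+1})/3) for f(y) = -y.
    # The implicit relation is linear here: y_{n+1}*(1 + 2h/3) = y_n*(1 - h/3).
    return y * (1 - h / 3) / (1 + 2 * h / 3)

# One-step error starting from the exact value y(0) = 1; exact y(h) = exp(-h).
errors = [abs(math.exp(-h) - one_step(1.0, h)) for h in (0.1, 0.05, 0.025)]
rates = [errors[i] / errors[i + 1] for i in range(2)]
print(errors, rates)  # rates near 4 when h is halved => one-step error O(h^2)
```

Halving $h$ divides the error by roughly 4, consistent with an $O(h^2)$ one-step error.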


There are 2 best solutions below


The first step in deriving a Runge-Kutta method is to determine values for $ a_1 $, $\alpha_1 $, and $ \beta_1 $ with the property that $ a_1f(t + \alpha_1, y + \beta_1) $ approximates

$$ T^{(2)}(t, y) = f(t, y) + \frac{h}{2}f'(t, y), $$

with error no greater than $O(h^2)$, which is the same as the order of the local truncation error for the Taylor method of order two. Since $$ f'(t, y) = \frac{df}{dt}(t,y) = \frac{\partial f}{\partial t}(t, y) + \frac{\partial f}{\partial y}(t, y) \cdot y'(t) \quad \text{and} \quad y'(t)=f(t, y), $$ we have $$ T^{(2)}(t, y) = f(t, y) + \frac{h}{2} \frac{\partial f}{\partial t}(t, y) + \frac{h}{2} \frac{\partial f}{\partial y}(t, y) \cdot f(t, y). \quad (5.18) $$

Expanding $f(t + \alpha_1, y + \beta_1)$ in its Taylor polynomial of degree one about $(t, y)$ gives $$ a_1f(t + \alpha_1, y + \beta_1) = a_1f(t, y) + a_1\alpha_1 \frac{\partial f}{\partial t}(t, y) + a_1\beta_1 \frac{\partial f}{\partial y}(t, y) + a_1 R_1(t + \alpha_1, y + \beta_1). \quad (5.19) $$

where $$ R_1(t + \alpha_1, y + \beta_1) = \frac{\alpha_1^2}{2} \frac{\partial^2 f}{\partial t^2}(\zeta, \mu) + \alpha_1\beta_1 \frac{\partial^2 f}{\partial t \partial y}(\zeta, \mu) + \frac{\beta_1^2}{2} \frac{\partial^2 f}{\partial y^2}(\zeta, \mu), \quad (5.20) $$

for some $\zeta$ between $t$ and $t + \alpha_1$ and $\mu$ between $y$ and $y + \beta_1$.

Matching the coefficients of $f$ and its derivatives in Eqs. (5.18) and (5.19) gives the three equations $$ f(t,y)\;:\;a_1 = 1; \quad \frac{\partial f}{\partial t}(t,y)\;:\;a_1\alpha_1 = \frac{h}{2}; \quad \frac{\partial f}{\partial y}\;:\;a_1\beta_1 = \frac{h}{2} f(t, y). $$

The parameters $a_1, \alpha_1,$ and $\beta_1$ are therefore $$ a_1 = 1, \quad \alpha_1 = \frac{h}{2}, \quad \text{and} \quad \beta_1 = \frac{h}{2} f(t, y), $$

so $$ T^{(2)}(t, y) = f \left( t + \frac{h}{2}, y + \frac{h}{2} f(t, y) \right) - R_1 \left( t + \frac{h}{2}, y + \frac{h}{2} f(t, y) \right), $$

and from Eq. (5.20), $$ R_1 \left( t + \frac{h}{2}, y + \frac{h}{2} f(t, y) \right) = \frac{h^2}{8} \frac{\partial^2 f}{\partial t^2}(\zeta, \mu) + \frac{h^2}{4}f(t,y) \frac{\partial^2 f}{\partial t \partial y}(\zeta, \mu) + \frac{h^2}{8} (f(t,y))^2\frac{\partial^2 f}{\partial y^2}(\zeta, \mu). $$

If all the second-order partial derivatives of $f$ are bounded, then $$ R_1 \left( t + \frac{h}{2}, y + \frac{h}{2} f(t, y) \right) $$ is $O(h^2)$. As a consequence:

  • The order of error for this new method is the same as that of the Taylor method of order two.

  • The difference-equation method resulting from replacing $T^{(2)}(t, y)$ in Taylor's method of order two by $f\left(t + \frac{h}{2}, y + \frac{h}{2} f(t, y)\right)$ is a specific Runge-Kutta method known as the Midpoint method.

Midpoint Method:

$$ w_{0} = \alpha, \quad w_{i+1} = w_i + h f\left(t_i + \frac{h}{2}, w_i + \frac{h}{2} f(t_i, w_i)\right) \quad \text{for } i = 0, 1, \ldots, N - 1. $$
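As an illustration (not part of the text above), a short script can confirm the second-order convergence of the Midpoint method; the test problem $y'=y$, $y(0)=1$ with exact solution $e^t$ is my own choice:

```python
import math

def midpoint(f, t0, y0, T, N):
    # Midpoint method: w_{i+1} = w_i + h*f(t_i + h/2, w_i + (h/2)*f(t_i, w_i)).
    h = (T - t0) / N
    t, w = t0, y0
    for _ in range(N):
        w = w + h * f(t + h / 2, w + (h / 2) * f(t, w))
        t += h
    return w

f = lambda t, y: y  # test problem y' = y, y(0) = 1, exact solution e^t
errs = [abs(midpoint(f, 0.0, 1.0, 1.0, N) - math.e) for N in (20, 40, 80)]
orders = [math.log2(errs[i] / errs[i + 1]) for i in range(2)]
print(errs, orders)  # observed order close to 2
```

The observed convergence order of the global error is close to 2, matching the $O(h^2)$ local truncation error.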

Only three parameters are present in $a_1 f(t + \alpha_1, y + \beta_1)$, and all are needed in the match of $T^{(2)}$. So a more complicated form is required to satisfy the conditions for any of the higher-order Taylor methods.

The most appropriate four-parameter form for approximating $$ T^{(3)}(t, y) = f(t, y) + \frac{h}{2} f'(t, y) + \frac{h^2}{6} f''(t, y) $$ is $$ a_1 f(t, y) + a_2 f(t + \alpha_2, y + \delta_2 f(t, y)), \quad (5.21) $$ and even with this, there is insufficient flexibility to match the term $$ \frac{h^2}{6} \bigg[\frac{\partial f}{\partial y}(t, y)\bigg]^2 f(t, y), $$ resulting from the expansion of $\frac{h^2}{6}f''(t, y)$. Consequently, the best that can be obtained from using (5.21) are methods with $O(h^2)$ local truncation error.


For the determination of the order of the method it is sufficient to consider autonomous ODEs: RK methods are the same for systems as for scalar equations, and any system can be made autonomous by adding the equation $t'=1$.

A decomposition of the method

Next consider what the midpoint $z=\frac13(y_n+2y_{n+1})$ is and how well it approximates the exact solution. From $y_{n+1}=y_n+hf(z)$, $$ 2y_{n+1}+y_n=3y_n+2hf(z)\\ z=y_n+\frac{2h}3f(z).\tag{A1} $$ This is an implicit Euler step of size $\frac{2h}3$. The remainder of the step can be written as $$ y_{n+1}=y_n+hf(z)=z+\frac{h}3f(z),\tag{B1} $$ which is an explicit (forward) Euler step of size $\frac h3$.
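The equivalence of (A1) followed by (B1) with the original one-stage method is easy to check numerically. The sketch below (my own; $f(y)=\sin y$ is an arbitrary choice, and plain fixed-point iteration is used for the implicit equations) compares both formulations of a single step:

```python
import math

def original_step(f, y, h, iters=60):
    # Solve y1 = y + h*f((y + 2*y1)/3) by fixed-point iteration.
    y1 = y
    for _ in range(iters):
        y1 = y + h * f((y + 2 * y1) / 3)
    return y1

def decomposed_step(f, y, h, iters=60):
    # (A1): implicit Euler step of size 2h/3 to the midpoint, z = y + (2h/3)*f(z),
    # (B1): explicit Euler step of size h/3 from z.
    z = y
    for _ in range(iters):
        z = y + (2 * h / 3) * f(z)
    return z + (h / 3) * f(z)

a = original_step(math.sin, 0.8, 0.1)
b = decomposed_step(math.sin, 0.8, 0.1)
print(a, b)  # both formulations give the same step, up to iteration tolerance
```

Both fixed-point iterations converge to the same midpoint $z$, so the two step formulations agree to machine precision.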

Let's consider the slightly more general situation of an implicit step of size $k$ followed by an explicit step of size $\ell$, with $k+\ell=h$; the method above has $k=\frac{2h}3$ and $\ell=\frac h3$.

Analysis of the implicit step

For the exact solution through $(t_n,y_n)$, write $y(t_n+k)=z+w$ and expand backwards from $t_n+k$, so that in the end only derivatives at $z$ appear: \begin{align} y(t_n)&=y((t_n+k)-k)\\&=(z+w) \begin{aligned}[t]&-kf(z+w)+\frac{k^2}2f^{[1]}(z+w)\\&-\frac{k^3}{6}f^{[2]}(z+w)+...\end{aligned}\tag{A2} \end{align} Here $y^{(j+1)}(t)=f^{[j]}(y(t))$, which is the basis of this style of analysis: the time derivatives of an exact solution can be expressed in $f$ and its space derivatives. The first such expressions are $$ f^{[1]}(z)=f'(z)f(z), \qquad f^{[2]}(z)=f''(z)[f(z),f(z)]+f'(z)^2f(z). $$

Use equation (A2) as a recursion formula for $w$, the difference at the midpoint between the exact solution and the numerical value. $$ w-kf'(z)w=k(f(z+w)-f(z)-f'(z)w)-\frac{k^2}2f^{[1]}(z+w)+\frac{k^3}{6}f^{[2]}(z+w)+... $$ All terms on the right are at least second order in $k\sim h$, so $w$ is second order as well. Use that to determine $w$, leaving out terms of 4th order and higher: \begin{align} (I-kf'(z))w &=-\frac{k^2}2f^{[1]}(z)+\frac{k^3}{6}f^{[2]}(z)+O(k^4)\\ w&=-\frac{k^2}2f^{[1]}(z)-\frac{k^3}2f'(z)f^{[1]}(z)+\frac{k^3}{6}f^{[2]}(z)+O(k^4) \end{align}

Analysis of the explicit step

Taking the same Taylor expansion as in (A2), now in the forward direction, gives \begin{align} y(t_{n+1})&=y((t_n+k)+\ell)\\ &=(z+w)\begin{aligned}[t] &+\ell f(z+w)+\frac{\ell^2}2f^{[1]}(z+w)\\ &+\frac{\ell^3}{6}f^{[2]}(z+w)+... \end{aligned}\tag{B2} \\ &=z+\ell f(z) \begin{aligned}[t] &+w+\frac{\ell^2}2f^{[1]}(z)\\ &+\ell f'(z)w + \frac{\ell^3}{6}f^{[2]}(z)+O(h^4) \end{aligned} \end{align}

Now insert or eliminate $w$ \begin{align} y(t_{n+1}) &=z+\ell f(z) \begin{aligned}[t] &+\frac{h(\ell-k)}2f^{[1]}(z)-\frac{hk^2}{2} f'(z)f^{[1]}(z) \\ &+ \frac{h(k^2-k\ell+\ell^2)}{6}f^{[2]}(z)+O(h^4) \end{aligned} \\ &=y(t_n)+h f(z) \begin{aligned}[t] &+\frac{h(\ell-k)}2f'(z)f(z)\\ &+\frac{h^2(\ell-2k)}{6} f'(z)^2f(z) \\ &+ \frac{h(k^2-k\ell+\ell^2)}{6}f''(z)[f(z),f(z)]+O(h^4) \end{aligned} \end{align} using $k+\ell=h$, $k^3+\ell^3=h(k^2-k\ell+\ell^2)$ and $\ell^2-k\ell-2k^2=(\ell-2k)h$.

Conclusion on the order

One can now directly read off \begin{align} \tau_n&=\frac{y(t_{n+1})-y(t_n)}h-f(z) \\ &=\frac{\ell-k}2f'(z)f(z) +\frac{h(\ell-2k)}{6} f'(z)^2f(z) \\ &~~~~+ \frac{k^2-k\ell+\ell^2}{6}f''(z)[f(z),f(z)]+O(h^3). \end{align} Only for $k=\ell=\frac h2$ does the first-order term vanish. The two second-order terms do not vanish then, and there is no freedom left to cancel them against each other, giving order 2 to the implicit midpoint method, as is well known.

For the given $k=\frac23h$, the first-order term does not vanish, so the method has order one. The task seems to use the other definition of the truncation error, the one-step error $y(t_{n+1})-y_{n+1}$, which is then $O(h^2)$. Both definitions are commonly used, so one always has to check which one is actually in use.
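This order-one behaviour of the global error is easy to observe numerically; the following sketch (my own check, using $f(y)=-y$ so that the stage equation has a closed-form solution) integrates $y'=-y$ on $[0,1]$:

```python
import math

def integrate(N):
    # Integrate y' = -y on [0, 1] in N steps of the given implicit RK method.
    # For f(y) = -y the stage equation is linear, so each step reduces to
    # y_{n+1} = y_n * (1 - h/3) / (1 + 2h/3).
    h = 1.0 / N
    y = 1.0
    for _ in range(N):
        y *= (1 - h / 3) / (1 + 2 * h / 3)
    return y

errs = [abs(integrate(N) - math.exp(-1.0)) for N in (40, 80, 160)]
orders = [math.log2(errs[i] / errs[i + 1]) for i in range(2)]
print(errs, orders)  # observed global order is close to 1
```

Doubling the number of steps halves the error, i.e. first-order convergence.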

Stability

Apply the method to the test equation $\dot y=f(y)=\lambda y$ to get $y_n=(1-\lambda k)z$ and $y_{n+1}=(1+\lambda \ell)z$, so you have to find where in the complex plane for $\lambda$ $$ |1+\lambda \ell|\le |1-\lambda k| \iff \operatorname{Re}(\lambda)\le\frac{k-\ell}{2}|\lambda|^2. $$ For the given method $k-\ell=\frac h3>0$, so the inequality holds for every $\lambda$ with $\operatorname{Re}(\lambda)\le 0$: the stability region contains the whole left half-plane, and the method is A-stable.
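As a brute-force cross-check (my own, not part of the answer): for $k=\frac{2h}3$, $\ell=\frac h3$ the amplification factor is $R(z)=\frac{1+z/3}{1-2z/3}$ with $z=\lambda h$, and sampling the left half-plane should find no point with $|R(z)|>1$:

```python
# Stability function of the method for y' = lambda*y with k = 2h/3, ell = h/3:
# y_{n+1} = R(z) * y_n, where z = lambda*h and R(z) = (1 + z/3) / (1 - 2z/3).
def R(z):
    return (1 + z / 3) / (1 - 2 * z / 3)

# Sample z on a grid covering part of the left half-plane Re(z) <= 0.
grid = [complex(-10 + 0.5 * i, -10 + 0.5 * j) for i in range(21) for j in range(41)]
violations = [z for z in grid if abs(R(z)) > 1 + 1e-12]
print(len(violations))  # 0: |R| <= 1 on the whole sampled region
```

No sampled point in the left half-plane violates $|R(z)|\le 1$, consistent with A-stability.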