Row-sum condition for Runge-Kutta methods


Consider a general RK-method with weights $\vec{b}$ $(s\times 1)$, nodes $\vec{c}$ $(s\times 1)$ and matrix $\vec{A}$ $(s\times s)$. In the literature, there is a widely repeated minimum condition for consistency called the row-sum condition: $$c_i = \sum_{j=1}^{s}A_{ij}, $$ see e.g. Wikipedia. However, I have not been able to find any proof of this fact. Any ideas?

The odd thing is that there seems to be another, just as fundamental, condition for consistency rarely mentioned, namely that $$\sum_{j=1}^{s}b_{j}=1.$$

Indeed, recall that the RK-method, for the equation $y'=f(t,y)$, is defined by: $$Y_j' = f(t_n+c_jh,Y_j), \ j = 1,...,s$$ $$Y_j = y_n+\sum_{k=1}^{s}A_{jk}hY_k', \ j=1,...,s$$ $$y_{n+1} = y_n + \sum_{j=1}^{s}b_jhY_j'$$
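The defining equations above can be sketched in code for an explicit method (strictly lower-triangular $A$, so each stage only uses earlier stages). This is a minimal sketch of mine for the scalar case; the function name `rk_step` and the choice of Heun's method as the example tableau are assumptions, not from the post:

```python
import numpy as np

def rk_step(f, t, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method from its Butcher
    tableau (A, b, c). The stage slopes k[j] play the role of Y_j'."""
    s = len(b)
    k = np.zeros(s)
    for j in range(s):
        # Y_j = y_n + h * sum_l A[j, l] * Y_l'  (only l < j contributes)
        Yj = y + h * np.dot(A[j, :j], k[:j])
        k[j] = f(t + c[j] * h, Yj)
    # y_{n+1} = y_n + h * sum_j b[j] * Y_j'
    return y + h * np.dot(b, k)

# Heun's method (explicit trapezoidal rule) as an example tableau
A = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])
c = np.array([0.0, 1.0])

# Integrate y' = y on [0, 1] with y(0) = 1; exact answer is e
h, t, y = 0.01, 0.0, 1.0
while t < 1.0 - 1e-12:
    y = rk_step(lambda t, y: y, t, y, h, A, b, c)
    t += h
print(abs(y - np.e))  # small, since Heun's method is second order
```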

Taking exact data at $t_n$ and expanding to lowest order gives: $$Y_j' = f(t_n+c_jh,Y_j)=f(t_n,y(t_n))+\mathcal{O}(h), \ j = 1,...,s$$ $$Y_j = y(t_n)+\mathcal{O}(h), \ j=1,...,s$$ $$y_{n+1}|_{\mathrm{exact}} = y(t_n) + \sum_{j=1}^{s}b_jhY_j'=y(t_n) + hf(t_n,y(t_n))\sum_{j=1}^{s}b_j + \ \mathcal{O}(h^2)$$ while $$y(t_{n+1}) = y(t_n+h) = y(t_n)+hy'(t_n) + \mathcal{O}(h^2) = y(t_n)+hf(t_n,y(t_n)) + \mathcal{O}(h^2), $$ showing that we must have $$\sum_{j=1}^{s}b_{j}=1$$ for consistency of order 1.
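The derivation above can also be checked numerically: if the weights are rescaled so that $\sum_j b_j \neq 1$, the local error of a single step degrades from $\mathcal{O}(h^3)$ (for this second-order method) to $\mathcal{O}(h)$. A small sketch of mine, using Heun's method on $y'=y$ with deliberately broken weights:

```python
import numpy as np

def one_step(f, t, y, h, A, b, c):
    # One explicit RK step (A strictly lower triangular)
    s = len(b)
    k = np.zeros(s)
    for j in range(s):
        k[j] = f(t + c[j] * h, y + h * np.dot(A[j, :j], k[:j]))
    return y + h * np.dot(b, k)

# y' = y, y(0) = 1, so the exact value after one step is e^h
f = lambda t, y: y
A = np.array([[0.0, 0.0], [1.0, 0.0]])
c = np.array([0.0, 1.0])

b_good = np.array([0.5, 0.5])   # sum = 1: satisfies the condition
b_bad  = np.array([0.4, 0.4])   # sum = 0.8: violates it

for h in [1e-2, 1e-3]:
    err_good = abs(one_step(f, 0.0, 1.0, h, A, b_good, c) - np.exp(h))
    err_bad  = abs(one_step(f, 0.0, 1.0, h, A, b_bad,  c) - np.exp(h))
    print(h, err_good, err_bad)
# err_good shrinks like h^3, err_bad only like h: the broken method
# is not even consistent of order 1
```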

Best Answer

These are two of the common simplifying assumptions, developed by John Butcher, used in deriving Runge-Kutta methods. For an $s$-stage method, define

\begin{align} B(p)&:\quad \sum_{j=1}^s b_j c_j^{k-1} = \frac{1}{k}, \hspace{4ex}k=1,...,p, \\ C(\eta)&:\quad \sum_{j=1}^s a_{ij} c_j^{k-1} = \frac{c_i^k}{k}, \hspace{4ex}i=1,...,s, \text{ and }k=1,...,\eta. \end{align}

Notice that $B(1)$ and $C(1)$ are exactly the two conditions you are referring to. For an RK method of order $p$, it is necessary that $B(p)$ holds (see Butcher, Volume III, section 321). This makes $\sum_{i=1}^s b_i = 1$ a necessary condition for a method of order $\geq 1$. The stage order of a Runge-Kutta method is defined as the largest $n$ such that $B(n)$ and $C(n)$ both hold. The row-sum condition $C(1)$ is related to accuracy and is important in practice, but it is not strictly necessary for a convergent method. Here is an old paper:

https://epubs.siam.org/doi/pdf/10.1137/0902026

where the row-sum constraint is not satisfied (but $B(1)$ still is).
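As an illustration of the $B$ and $C$ conditions, one can check them directly for the classic fourth-order RK tableau. This is a small sketch of mine; the helper names `B` and `C` just mirror the notation above:

```python
import numpy as np

# Classic fourth-order Runge-Kutta tableau
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0.0, 0.5, 0.5, 1.0])

def B(k):
    # Does sum_j b_j c_j^{k-1} equal 1/k ?
    return bool(np.isclose(np.dot(b, c**(k-1)), 1.0/k))

def C(k):
    # Does sum_j a_ij c_j^{k-1} equal c_i^k / k for every i ?
    return bool(np.allclose(A @ (c**(k-1)), c**k / k))

print([B(k) for k in range(1, 5)])  # B(1)..B(4) all hold for RK4
print(C(1), C(2))                   # C(1) (the row-sum condition) holds,
                                    # but C(2) fails: stage order is 1
```

So classic RK4 has order 4 but stage order only 1, which is exactly why the row-sum condition and $\sum_j b_j = 1$ are the conditions one sees most often: they are the $k=1$ members of these families.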

For details on the involved but quite interesting theory, I would suggest John Butcher's book, Numerical Methods for Ordinary Differential Equations. This review of DIRK methods is also good:

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20160005923.pdf