Currently, we are using the backward Euler (implicit Euler) method to solve stiff ordinary differential equations in a scientific-computing application.
Assume fairly performant hardware and a fixed step size smaller than 100 µs. Are there other stable integration methods that can compute y(n+1) within a single time step (i.e., in real time) and have lower truncation errors? What are their pros and cons?
I would like to implement the most promising ones and benchmark their results.
External references:
If you are currently using backward Euler for a linear ODE $\dot y=Ay+b$, then at every step you solve $$ \Bigl(I-hA(t_n+h)\Bigr)\,y_{n+1} = y_n+hb(t_n+h) $$ If $A$ is constant, that amounts to one matrix factorization at initialization plus the corresponding forward/backward substitutions per step.
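A minimal sketch of this setup (my own illustration, not code from the answer), assuming constant $A$ and $b$ and using SciPy's LU routines: the matrix $I-hA$ is factored once, and each step then costs only the triangular substitutions.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def backward_euler(A, b, y0, h, n_steps):
    """Backward Euler for y' = A y + b with constant A and b."""
    I = np.eye(len(y0))
    lu = lu_factor(I - h * A)        # one factorization at initialization
    y = np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        # solve (I - h A) y_{n+1} = y_n + h b  per step
        y = lu_solve(lu, y + h * b)
    return y

# Stiff scalar test problem y' = -1000 y, integrated to t = 0.01
A = np.array([[-1000.0]])
b = np.array([0.0])
y = backward_euler(A, b, [1.0], 1e-4, 100)
```

For a sparse $A$ the same structure applies with a sparse factorization (e.g. `scipy.sparse.linalg.splu`) in place of the dense LU.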
Using the implicit trapezoidal formula $$ y_{n+1}=y_n+\frac h2(f(t_n,y_n)+f(t_n+h,y_{n+1})) $$ requires solving, for the linear ODE, the system $$ \left(I-\frac h2 A(t_n+h)\right)y_{n+1}=\left(I+\frac h2A(t_n)\right)y_n+\frac h2\left(b(t_n)+b(t_n+h)\right) $$ Compared to the previous method, this adds one matrix-vector multiplication per step, which is not much effort, especially if $A$ is sparse. It also needs a bit more bookkeeping and storage, which is a concern at coding time but does not materially affect the run time.
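Under the same constant-$A$, constant-$b$ assumption, the trapezoidal step can be sketched as follows (again my own illustration, not code from the answer); note the single factorization plus the extra matrix-vector product $\bigl(I+\frac h2 A\bigr)y_n$ per step:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def trapezoidal(A, b, y0, h, n_steps):
    """Implicit trapezoidal rule for y' = A y + b with constant A and b."""
    I = np.eye(len(y0))
    lu = lu_factor(I - 0.5 * h * A)  # factor (I - h/2 A) once
    M = I + 0.5 * h * A              # right-hand-side operator
    y = np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        # (I - h/2 A) y_{n+1} = (I + h/2 A) y_n + h/2 (b(t_n) + b(t_{n+1}));
        # with constant b the source term collapses to h*b
        y = lu_solve(lu, M @ y + h * b)
    return y

# y' = -y on [0, 1]: global error is O(h^2), vs. O(h) for backward Euler
A = np.array([[-1.0]])
b = np.array([0.0])
y = trapezoidal(A, b, [1.0], 0.01, 100)
```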
(3/27/2021) The third-order DiagIRK method found in the cited slides of Butcher (IRK.pdf, slide 25) combines 3 stages, each of which is essentially equivalent to an implicit Euler step. The difference in order means that, approximately, where implicit Euler needs 1000 steps of size $10^{-3}$ to reach an error of $10^{-3}$, the DiagIRK method only needs step size $0.1$, i.e. 10 steps amounting to 30 Euler-equivalent solves. The gap widens as the target error shrinks: to reach $10^{-6}$, it is $10^6$ Euler steps vs. 300 Euler-equivalent solves for DiagIRK.
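The exact DiagIRK coefficients from the slides are not reproduced here. As an illustration of the same idea, the sketch below uses the classical two-stage, third-order SDIRK method of Crouzeix, which shares the key property: every stage solves a linear system with the same matrix $I-h\gamma A$, so a single factorization serves all stages. The method choice and all names below are my own, not from the cited slides.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# A-stable root of the third-order conditions for a 2-stage SDIRK
GAMMA = 0.5 + np.sqrt(3.0) / 6.0

def sdirk3(A, y0, h, n_steps):
    """Crouzeix's two-stage, order-3 SDIRK for the linear ODE y' = A y."""
    I = np.eye(len(y0))
    lu = lu_factor(I - h * GAMMA * A)  # one factorization, reused by both stages
    a21 = 1.0 - 2.0 * GAMMA
    y = np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        k1 = lu_solve(lu, A @ y)                   # stage 1: implicit-Euler-like solve
        k2 = lu_solve(lu, A @ (y + h * a21 * k1))  # stage 2: same factored matrix
        y = y + 0.5 * h * (k1 + k2)                # weights b = (1/2, 1/2)
    return y

# y' = -y on [0, 1] with the coarse step h = 0.1: third-order accuracy
y = sdirk3(np.array([[-1.0]]), [1.0], 0.1, 10)
```

With 10 steps of 2 solves each, this takes 20 Euler-equivalent solves here, which matches the kind of step-count trade-off described above.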