I'm working with the modified Euler method, sometimes called Heun's method or the explicit trapezoidal method. A book I have on the numerical analysis of ordinary differential equations claims:
The effect of rounding error on the accuracy of the numerical solution is very similar to the error of the numerical differentiation formulas: the truncation error decreases with $h$ but the rounding error increases, and there exists an optimal value of $h$ for which the sum of both errors is minimal.
This optimal value of $h$ is very small (for example, for Euler's method it is $\sqrt{\mu}$, where $\mu$ is the machine accuracy), so reaching it is computationally expensive.
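To see where the $\sqrt{\mu}$ for Euler's method comes from, one can balance the two error contributions. This is only a scale argument, assuming the global truncation error behaves like $C\,h$ and that the $1/h$ steps accumulate a rounding error of order $\mu/h$:

```latex
E(h) \approx \underbrace{\frac{\mu}{h}}_{\text{rounding}}
           + \underbrace{C\,h}_{\text{truncation}},
\qquad
E'(h^*) = -\frac{\mu}{(h^*)^2} + C = 0
\;\Longrightarrow\;
h^* = \sqrt{\mu/C} \sim \sqrt{\mu}.
```

With constants of moderate size, $C$ only shifts the optimum by a factor near $1$, so the $\sqrt{\mu}$ scale is what survives.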
I can illustrate what the book is talking about with a numerical differentiation formula. Take, for example:
$$f'(x_0) \simeq \frac{f(x_0+h)-f(x_0)}{h}~\text{ with error }~-h \frac{f''(\theta)}{2}$$
Writing $e(x)$ for the rounding error at the point $x$, the total error made while approximating $f'(x_0)$ is $$ \frac{e(x_0+h)-e(x_0)}{h} - h \frac{f''(\theta)}{2}. $$ Assuming that the rounding errors are bounded by $\epsilon$ and that $f''$ is bounded by $M$ on $[x_0,x_0+h]$, the total error satisfies $$ \left|f'(x_0)-\frac{f_1(x_0+h)-f_1(x_0)}{h}\right| \leq \frac{2\epsilon}{h} + \frac{h}{2}M, $$ where $f_1$ is the approximation of $f$ carrying the rounding error. So, as $h$ decreases, the error of the formula (the truncation error) decreases but the rounding error grows; the bound is minimized at $h^* = 2\sqrt{\epsilon/M}$.
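This trade-off is easy to reproduce numerically. A minimal sketch, assuming $f=\sin$ and $x_0=1$ (my choices, not from the book); in double precision $\epsilon\approx 2^{-53}$, so the bound above is minimized near $h^*\approx 10^{-8}$:

```python
import math

def forward_diff(f, x0, h):
    # one-sided difference quotient; total error behaves like 2*eps/h + M*h/2
    return (f(x0 + h) - f(x0)) / h

exact = math.cos(1.0)  # derivative of sin at x0 = 1
errors = {h: abs(forward_diff(math.sin, 1.0, h) - exact)
          for h in (1e-2, 1e-8, 1e-15)}
# h = 1e-2: truncation dominates; h = 1e-15: rounding dominates;
# h = 1e-8 ~ sqrt(machine epsilon) sits near the minimum of the sum
```

Running this, the error at $h=10^{-8}$ comes out several orders of magnitude below the errors at both $h=10^{-2}$ and $h=10^{-15}$, which is exactly the V-shaped error curve the book describes.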
My question is very simple: how can I get a similar situation for the trapezoidal method?
In the same way. If $h=1/N$, then each of the $N$ steps contributes some numerical error $\epsilon$ in addition to the local truncation error $C\cdot h^3$. In the global error this sums, simplified, to $$ N\cdot(\epsilon+C\cdot h^3)=\frac{\epsilon}h+C\cdot h^2. $$ Under favorable conditions $\epsilon\sim\mu$ and $C\sim L^3$ (of the same scale), $L$ the Lipschitz constant, so that the optimal step size is found around $Lh=\sqrt[3]\mu$.
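The truncation side of this balance is cheap to verify. A sketch, using my own test problem $y'=y$, $y(0)=1$ on $[0,1]$, for which $y(1)=e$; reaching the rounding-dominated regime below $h\sim\sqrt[3]{\mu}\approx 6\cdot 10^{-6}$ would take millions of steps, so it is omitted here:

```python
import math

def heun(f, y0, t0, t1, n):
    # Heun / explicit trapezoidal method: average the slope at the start
    # and at the Euler predictor; local error O(h^3), global error O(h^2)
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y += h * (k1 + k2) / 2.0
        t += h
    return y

f = lambda t, y: y
errors = {n: abs(heun(f, 1.0, 0.0, 1.0, n) - math.e)
          for n in (10, 100, 1000)}
# while truncation dominates, dividing h by 10 divides the error by ~100
```

Pushing $n$ toward $10^6$ and beyond, the observed error stops improving and starts to climb again, as in the $\epsilon/h + C h^2$ estimate above.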
See https://math.stackexchange.com/a/1239002/115115 for a practical example showing that these scale considerations have good practical relevance. There, with modified step sizes, the tables for Heun and RK4 clearly show the crossover to the dominance of the floating-point errors at the expected step sizes $\sqrt[3]\mu\approx 10^{-5}$ and $\sqrt[5]\mu\approx 10^{-3}$ (the last Euler instance ran 30 min; I did not want to wait 5 h for the next instance with $10^{10}$ steps).