Suppose we have a standard ODE $y'(t) = f(t, y)$ with $y \in \mathbb{R}$ and initial condition $y(t_0) = y_0$.
How can I quickly compute an accurate solution $y(t_i)$ at the sample points $t_i \in \{t_0, \dots, t_{N-1}\}$?
Standard methods for solving this are Euler, Runge–Kutta, etc. But what if evaluating $f(t, y)$ is time-consuming? Is there a GPU method that can accelerate this?
I had an idea: sample $f(t, y)$ over a region $[t_0, t_{N-1}] \times [y_\min, y_\max]$ using GPU threads (say $f_{ij} = f(t_i, y_j)$), which can be done quickly on a GPU. Then we can trace an integral curve through this $f_{ij}$ grid using some interpolation. The points $[t_i, y_i]$ can be taken as an approximate solution, after which we can refine the grid around this curve, evaluate $f(t_i, y_p)$ in its neighborhood, and improve the solution...
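To make the idea concrete, here is a minimal serial NumPy sketch of the grid-and-interpolate scheme. Everything here is illustrative: the right-hand side `f`, the grid sizes, and the bounds are made up, and the grid sampling in step 1 is the part that would run as one GPU thread per grid point.

```python
import numpy as np

# Hypothetical right-hand side, standing in for an expensive f(t, y).
def f(t, y):
    return np.sin(t) * y

# 1. Sample f on a grid [t0, tN] x [ymin, ymax]. On a GPU this is
#    embarrassingly parallel; here we just vectorize with NumPy.
t0, tN, ymin, ymax = 0.0, 2.0, 0.0, 5.0
nt, ny = 201, 201
ts = np.linspace(t0, tN, nt)
ys = np.linspace(ymin, ymax, ny)
F = f(ts[:, None], ys[None, :])          # F[i, j] = f(t_i, y_j)

# 2. Bilinear interpolation of the precomputed samples.
def interp(t, y):
    i = np.clip((t - t0) / (tN - t0) * (nt - 1), 0, nt - 2)
    j = np.clip((y - ymin) / (ymax - ymin) * (ny - 1), 0, ny - 2)
    i0, j0 = int(i), int(j)
    a, b = i - i0, j - j0
    return ((1 - a) * (1 - b) * F[i0, j0] + a * (1 - b) * F[i0 + 1, j0]
            + (1 - a) * b * F[i0, j0 + 1] + a * b * F[i0 + 1, j0 + 1])

# 3. Trace the integral curve with explicit Euler, but evaluating the
#    interpolant instead of the expensive f itself.
y = 1.0                                   # initial condition y(t0) = y0
h = ts[1] - ts[0]
sol = [y]
for t in ts[:-1]:
    y = y + h * interp(t, y)
    sol.append(y)
```

Note that the accuracy is now limited by two errors at once: the time-stepping error of the tracer and the interpolation error of the grid, which is why a refinement pass around the traced curve (as proposed above) would be needed.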
Can this approach work or are there better approaches?
A synopsis of parallel methods for ODEs can be found here. In particular, the Parareal method offers a strategy for integrating an ODE in parallel. However, you still need to perform serial evaluations of $f$ with a coarse scheme.
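For illustration, here is a minimal serial sketch of Parareal for a scalar ODE. The test problem $y' = -y$, the choice of explicit Euler for both propagators, and the step counts are all arbitrary placeholders; the point is the structure, where the fine solves over the subintervals are independent and are the part you would farm out to a GPU or to parallel workers.

```python
import numpy as np

# Hypothetical test problem y' = -y, y(0) = 1, exact solution exp(-t).
def f(t, y):
    return -y

def euler(t, y, dt, nsteps):
    """Explicit Euler over [t, t + nsteps*dt]; used for both propagators."""
    for _ in range(nsteps):
        y = y + dt * f(t, y)
        t = t + dt
    return y

T, N = 2.0, 10                                     # N subintervals of [0, T]
dT = T / N
coarse = lambda t, y: euler(t, y, dT, 1)           # G: one big Euler step
fine = lambda t, y: euler(t, y, dT / 100, 100)     # F: 100 small steps

ts = np.linspace(0.0, T, N + 1)
U = np.empty(N + 1)
U[0] = 1.0
# Initial guess from the cheap, serial coarse propagator.
for n in range(N):
    U[n + 1] = coarse(ts[n], U[n])

for k in range(N):                                 # Parareal iterations
    # Independent fine (and coarse) solves: the parallel part.
    Fv = np.array([fine(ts[n], U[n]) for n in range(N)])
    Gv = np.array([coarse(ts[n], U[n]) for n in range(N)])
    # Serial correction sweep: U_{n+1} = G(U_n^new) + F(U_n^old) - G(U_n^old).
    for n in range(N):
        U[n + 1] = coarse(ts[n], U[n]) + Fv[n] - Gv[n]
```

After at most $N$ iterations Parareal reproduces the serial fine solution exactly, so the method only pays off when it converges in far fewer iterations than there are subintervals.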