I have to solve a differential equation of the type: $$ \frac{d y}{dt} = f(t,y) $$ $$ y(t=0) = y_0 $$ In general one would solve it with some high-order Runge-Kutta method... if one could compute $f(t,y)$.
However, evaluating $f(t,y)$ is extremely difficult in my case, and I can only obtain an estimate with significant stochastic error. Let us call this estimate $f^*(t,y)$: in practice I cannot calculate $f(t,y)$ itself, only the noisy value $f^*(t,y)$. Strictly speaking, $f^*(t,y)$ is a random variable with a normal distribution whose mean is $f(t,y)$ and whose variance is known.
Before proceeding, let me be very clear: I do not want to propagate the stochastic ODE $\frac{d y}{dt} = f^*(t,y)$. I want to solve $\frac{d y}{dt} = f(t,y)$; the stochastic error is purely a nuisance.
The most straightforward time propagation would be a simple forward Euler step using the noisy value: $$ y(t_{n+1}) = y(t_n) + h\, f^*(t_n, y(t_n)), \qquad t_{n+1} = t_n + h $$
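As a minimal sketch of this naive approach (the right-hand side `f`, the noise level `sigma`, and the function names are all hypothetical placeholders, not part of my actual problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(t, y):
    # hypothetical "true" right-hand side, here dy/dt = -y
    return -y

def f_star(t, y, sigma=0.05):
    # noisy estimator: normally distributed around f(t, y) with known std sigma
    return f(t, y) + rng.normal(0.0, sigma)

def euler_noisy(y0, t0, t_end, h, sigma=0.05):
    """Forward Euler where each step uses a single noisy evaluation f*."""
    t, y = t0, y0
    n_steps = int(round((t_end - t0) / h))
    for _ in range(n_steps):
        y = y + h * f_star(t, y, sigma)
        t += h
    return y

y_approx = euler_noisy(1.0, 0.0, 1.0, h=0.01)
# the result scatters around the exact value exp(-1), with the scatter
# set entirely by the per-step noise in f*
```

Each step injects an independent error of standard deviation $h\sigma$, which is exactly the problem described below.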
Apart from the low convergence order of Euler, using $f^*(t_n, y(t_n))$ directly is definitely sub-optimal: each step of the time propagation inherits the full stochastic error of a single evaluation of $f^*$.
But I can already think of a way to reduce it. At step $n$ I have already calculated several values $f^*(t_m, y(t_m))$ (with $m < n$) at previous times, and I could use them to reduce the error: for instance, I could make some smoothness assumption on $f(t, y(t))$ to obtain a better estimator of $f(t_n, y(t_n))$. One concrete idea: assume a polynomial dependence of $f(t, y(t))$ on $t$ and infer the coefficients from all the values $f^*(t_m, y(t_m))$ with $m \le n$. From that fit I can build an estimator of $f(t_n, y(t_n))$ with a reduced variance compared to the single evaluation $f^*(t_n, y(t_n))$.
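The polynomial-inference idea could be sketched as a least-squares fit over the stored noisy evaluations (function name, degree, and the equal-variance assumption are my own hypothetical choices for illustration):

```python
import numpy as np

def smoothed_estimate(ts, f_stars, t_n, degree=2):
    """Fit a degree-`degree` polynomial in t to the past noisy evaluations
    f*(t_m, y(t_m)), m <= n, and evaluate the fit at t_n.

    Assumes all evaluations share the same known variance, so an
    unweighted least-squares fit is appropriate; with heteroscedastic
    noise one would pass weights w = 1/sigma to np.polyfit instead.
    """
    coeffs = np.polyfit(np.asarray(ts), np.asarray(f_stars), deg=degree)
    return np.polyval(coeffs, t_n)
```

With $n$ past points and a degree-$d$ fit, the variance of the fitted value at $t_n$ is roughly a factor $(d{+}1)/n$ of the raw single-evaluation variance (larger near the interval edge), at the price of a bias whenever $f(t, y(t))$ is not locally polynomial.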
Of course, it would be even better if one could combine all of this with a higher-order Runge-Kutta method.
Does anyone know whether a problem like this has already been addressed in the literature? I could not find anything.