Let's assume I have an autonomous ODE of the form $$ \frac{dy}{dt} = \sum \limits_{i=1}^n \alpha_i\cdot f_i(y) $$ which I want to solve numerically on $t\in[0, t_{end}]$ with some initial condition $y(0)=y_0$. I already know that the system is well behaved (bounded, non-stiff, stable, etc.), so I can use straightforward ODE45-like solvers and not worry about time-discretization errors.
My problem is that the right-hand side is only an approximation: I only have approximate values for the coefficients $\alpha_i$ in the ODE. So the "real" solution, denoted by $\tilde{y}$, is given by $$\frac{d\tilde{y}}{dt} = \sum \limits_{i=1}^n (\alpha_i+\epsilon_i)\cdot f_i(\tilde{y})$$ with $\tilde{y}(0) = y_0$.
Can I give a bound on $|\tilde{y}(t_{end})-y(t_{end})|$ in terms of the magnitudes $|\epsilon_i|$?
I have already tried reusing bounds derived for time-stepping errors, but to no avail.
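For what it's worth, the effect is easy to probe numerically. Below is a minimal sketch (assuming SciPy; the basis functions $f_1(y)=-y$, $f_2(y)=\sin y$ and the values of $\alpha_i$, $\epsilon_i$ are made-up placeholders, since the question leaves them abstract) that integrates the nominal and perturbed systems with tight tolerances and compares the endpoints:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder basis functions f_i(y); the actual f_i are left abstract above.
f = [lambda y: -y, lambda y: np.sin(y)]
alpha = np.array([1.0, 0.5])    # nominal coefficients (made up)
eps = np.array([1e-3, -2e-3])   # coefficient perturbations (made up)

def rhs(coeffs):
    """Right-hand side sum_i c_i * f_i(y) for a given coefficient vector."""
    return lambda t, y: sum(c * fi(y) for c, fi in zip(coeffs, f))

y0, t_end = [1.0], 5.0
# Tight tolerances so the observed gap reflects the coefficient error,
# not the time discretization.
sol = solve_ivp(rhs(alpha), (0.0, t_end), y0, rtol=1e-10, atol=1e-12)
sol_pert = solve_ivp(rhs(alpha + eps), (0.0, t_end), y0, rtol=1e-10, atol=1e-12)

gap = abs(sol.y[0, -1] - sol_pert.y[0, -1])
print(gap)  # endpoint difference |y~(t_end) - y(t_end)|
```

For this contracting example the gap stays on the order of the perturbations; repeating the experiment while scaling `eps` gives an empirical check of how the endpoint error depends on the $\epsilon_i$.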