Proof that one cannot calculate the value of an ODE solution at a point without calculating the previous values


This question may need some rephrasing, so I will give two related questions. Suppose I have some autonomous ODE $\gamma'(t) = f(\gamma(t))$, with $\gamma: \mathbb{R} \to \mathbb{R}^n$, and $f: \mathbb{R}^n \to \mathbb{R}^n$. Traditionally, when we solve these things numerically, we move in small increments. If I want to numerically approximate the value of $\gamma$ at $t_0$, then usually as an intermediate step I must numerically approximate the value of $\gamma$ at a great number of points less than $t_0$.

I could move from $0$ to $t_0$ in one big step, but if $t_0$ is too large the error will be terrible. Still, there could be some other method that lets me approximate $\gamma(t_0)$ accurately at large times without approximating the solution at any intermediate points.
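To make the one-big-step problem concrete, here is a minimal sketch using forward Euler on the test equation $y' = y$ (this particular equation, and the helper name `euler`, are illustrative choices, not part of the question). A single step from $0$ to $t_0$ gives a very poor value; many small steps do much better:

```python
import math

def euler(f, y0, t0, n_steps):
    """Approximate y(t0) with n_steps forward-Euler steps from t = 0."""
    h = t0 / n_steps
    y = y0
    for _ in range(n_steps):
        y = y + h * f(y)  # one explicit Euler step: y_{k+1} = y_k + h f(y_k)
    return y

t0 = 5.0
exact = math.exp(t0)  # exact solution of y' = y, y(0) = 1, at t0

one_step = euler(lambda y: y, 1.0, t0, 1)        # single giant step
many_steps = euler(lambda y: y, 1.0, t0, 10_000)  # many small steps

print(abs(one_step - exact))    # large error
print(abs(many_steps - exact))  # much smaller error
```

The single step lands at $1 + t_0 = 6$, nowhere near $e^5 \approx 148.4$, while $10{,}000$ small steps recover the value to well under $1\%$.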

Is there a proof that such a method does not exist?

If this does not make sense, a related question might be to phrase this in terms of recurrence relations. Given some recurrence relation $\gamma_{i+1} = f(\gamma_i)$, under which conditions can I give an algorithm to compute $\gamma_n$ without computing $\gamma_i$ for any smaller index $i$?
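One special case where the answer is yes: when $f$ is linear, say $\gamma_{i+1} = a\,\gamma_i$, then $\gamma_n = a^n \gamma_0$, and binary exponentiation reaches $\gamma_n$ in $O(\log n)$ multiplications without visiting every earlier term. A sketch (the function names are illustrative):

```python
def gamma_n_linear(a, gamma0, n):
    """Compute gamma_n = a**n * gamma0 by binary exponentiation,
    touching only O(log n) intermediate products."""
    result = gamma0
    power = a
    while n > 0:
        if n & 1:          # this bit of n contributes a factor power = a**(2**k)
            result *= power
        power *= power      # square: a**(2**k) -> a**(2**(k+1))
        n >>= 1
    return result

def gamma_n_naive(a, gamma0, n):
    """Step-by-step iteration, for comparison: visits every gamma_i."""
    g = gamma0
    for _ in range(n):
        g = a * g
    return g

print(gamma_n_linear(3, 2, 10))  # 2 * 3**10 = 118098
```

The same trick extends to $\gamma_{i+1} = A\gamma_i$ with a matrix $A$ (repeated squaring of $A$), which is exactly why linear recurrences are "skippable" while general nonlinear ones are not.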

1 answer below.

Numerical methods depend on function evaluations at previous points to predict the solution at the given point.

There are adaptive methods that estimate the error of the next step for a given step size and adjust the step size accordingly.

Therefore you only have to use a very small step size where the predicted error would otherwise be too large.
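As a toy illustration of this adaptive idea, here is a step-doubling controller for forward Euler: compare one step of size $h$ against two steps of size $h/2$, use the difference as an error estimate, and grow or shrink $h$ accordingly. (The name `adaptive_euler` and the growth/shrink factors are illustrative, not any particular library's API.)

```python
import math

def adaptive_euler(f, y0, t_end, tol=1e-6, h=0.1):
    """Integrate y' = f(y) from t = 0 to t_end with a step-doubling
    error estimate controlling the step size."""
    t, y = 0.0, y0
    while t < t_end:
        h = min(h, t_end - t)            # do not overshoot the endpoint
        big = y + h * f(y)               # one step of size h
        half = y + (h / 2) * f(y)        # two steps of size h/2
        small = half + (h / 2) * f(half)
        err = abs(small - big)           # local error estimate
        if err <= tol:
            t, y = t + h, small          # accept the more accurate value
            h *= 1.5                     # predicted error small: grow the step
        else:
            h *= 0.5                     # predicted error too large: shrink it
    return y

print(adaptive_euler(lambda y: y, 1.0, 1.0))  # close to e
```

Note that even the adaptive method still marches through intermediate times; it only chooses *where* to spend its small steps.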

Can we avoid the middle steps entirely? Only if you have an analytic solution and do not have to depend on numerical methods at all.
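For example, the scalar linear ODE $y' = ky$, $y(0) = y_0$, has the analytic solution $y(t) = y_0 e^{kt}$, which can be evaluated at any $t_0$ directly, with no marching through earlier times (a sketch; `gamma_exact` is an illustrative name):

```python
import math

def gamma_exact(k, y0, t0):
    """Evaluate the known analytic solution y(t) = y0 * exp(k*t) at t0."""
    return y0 * math.exp(k * t0)

# Jump straight to t0 = 100 -- no intermediate time steps at all:
print(gamma_exact(-0.5, 1.0, 100.0))
```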