Scaling forcing function before numerical integration of linear ODE


Suppose we have a linear ODE (with some initial conditions) that we want to integrate numerically: $$ L\{y\}=f(t) $$ where $L$ is a linear differential operator, using a method of order $\mathcal{O}(h^n)$, $h$ being the step size of the method.

Is it true that if we scale up the forcing function $f(t)$ by a factor $K$ before integrating, and then rescale the solution $y$ down by $\frac{1}{K}$, we have increased the precision of the method? The argument is that the order of the method stays the same, since it depends only on the step size, so when we scale the solution back down the error is reduced.

Best answer:

Imagine the function $y_s$ is the actual solution to the problem

$$ L\{y_s\} = f(t) \tag{1a} $$

and $\hat{y}$ is a numerical approximation. The error at step $n$ is measured as

$$ \epsilon(n) = |y_s(t_n) - \hat{y}_n| \tag{2a} $$

Now, since $L$ is linear, if you multiply both sides of (1a) by $K$, then

$$ L\{ Ky_s \} = K f(t) \tag{1b} $$

The corresponding approximate solution is $\hat{y}^{K}_n$, so the error is

$$ \epsilon^K(n) = |K y_s(t_n) - \hat{y}^{K}_n| = K\left|y_s(t_n) - \frac{\hat{y}^{K}_n}{K}\right| = K \epsilon(n) \tag{2b} $$

So the error also scales with $K$. The last equality in (2b) holds because the numerical method itself is linear in the forcing: applying it to $Kf(t)$ produces exactly $K$ times the approximation it produces for $f(t)$, i.e. $\hat{y}^{K}_n = K\hat{y}_n$. That is: if you multiply the forcing by $K$ and then divide the result by $K$, the rescaled error is $\epsilon^K(n)/K = \epsilon(n)$, so the accuracy does not increase.
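A quick numerical check of this argument, as a minimal sketch: forward Euler applied to the hypothetical test problem $y' = -y + \sin(t)$, $y(0)=0$ (chosen here because its exact solution is known in closed form), once with the original forcing and once with the forcing scaled by $K$. The rescaled error matches the original error up to floating-point roundoff.

```python
import math

def euler(rhs, y0, t0, t1, n):
    """Forward Euler for y' = rhs(t, y) with n steps on [t0, t1]."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * rhs(t, y)
        t = t + h
    return y

def exact(t):
    # Exact solution of y' = -y + sin(t), y(0) = 0.
    return (math.sin(t) - math.cos(t) + math.exp(-t)) / 2.0

K = 1000.0   # scaling factor for the forcing (arbitrary choice)
n = 200      # number of Euler steps

# Integrate the original problem and the K-scaled problem.
y_hat = euler(lambda t, y: -y + math.sin(t), 0.0, 0.0, 1.0, n)
y_hat_K = euler(lambda t, y: -y + K * math.sin(t), 0.0, 0.0, 1.0, n)

err = abs(exact(1.0) - y_hat)            # epsilon(n)
err_rescaled = abs(exact(1.0) - y_hat_K / K)  # epsilon^K(n) / K

print(err, err_rescaled)  # the two errors agree up to roundoff
```

Because Euler's update is linear in the forcing, `y_hat_K / K` reproduces `y_hat` almost exactly, so scaling buys no accuracy; the same holds for any linear method (Runge-Kutta, multistep) applied to a linear ODE.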