How to bound the error when approximating an ODE


I have a question regarding how to bound the error, if one changes the "right hand side" of an ODE. For example, the equation of a simple pendulum in polar coordinates is something like $$\ddot{\theta}= k\sin\theta$$

The common simplification is to approximate $$\sin\theta \approx \theta$$ for small enough $\theta$.

I'll try to state my question in general terms now. Take the IVP: $$\left\{\begin{align}\dot x = f(x) \\ x(0) = x_0\end{align}\right.$$ with $$f:B_{r}(0)\subset\Bbb R^n\to\Bbb R^n$$

Now, suppose that $\tilde f:B_{\tilde r}(0)\to\Bbb R^n$ is such that $\|f(x)-\tilde f(x)\| \lt \varepsilon$ for $x\in B_{\tilde r}(0)$ if $\tilde r < \delta \leq r$. Under which conditions, and then how, can we place a bound on $$\|\varphi(t) - \tilde\varphi(t)\|$$ where $\varphi$ solves the original IVP, and $\tilde\varphi$ solves the new IVP formed by replacing $f$ with $\tilde f$? Obviously, the conditions must at least guarantee existence and uniqueness for the question to make sense, but what else (if anything)?

I'm not sure if this has to do with perturbation theory proper, so let me know if it's wrongly tagged.

Accepted answer:

I'm not an expert and I suggest you take the following answer with a grain of salt. Check the logic for yourself; I could be mistaken.


First assume that the exact and approximated solutions are both contained in a certain closed ball around the origin: $$\|\phi (\tau)\|,\ \|\tilde \phi (\tau)\| \leq R \quad \forall \tau \in [0,t]$$ Next assume that we can bound the difference between the exact and approximate vector fields for all $x$ in the same ball: $$\|f(x) - \tilde f(x)\| \leq \epsilon \quad \forall x \in \bar{B}_{R}(0)$$

Since both solutions $\phi$ and $\tilde \phi$ stay in this ball for $\tau \in [0,t]$, the bound applies along either trajectory: $$\|f(\phi(\tau)) - \tilde f(\phi(\tau))\| \leq \epsilon \quad \forall \tau \in [0,t]$$ and likewise with $\tilde\phi(\tau)$ in place of $\phi(\tau)$.


Now consider the difference between the $i$th components of the approximate and exact solutions, written in integral form: $$\tilde \phi_i (t) - \phi_i (t) = \int_0^t \left[ \tilde f_i(\tilde\phi(\tau)) - f_i(\phi(\tau)) \right]d\tau$$ (Strictly, the two vector fields are evaluated along different trajectories; the estimate below neglects the drift term $f_i(\tilde\phi(\tau)) - f_i(\phi(\tau))$. Controlling that term rigorously requires a Lipschitz constant $L$ for $f$, and Grönwall's inequality then replaces the bound $\epsilon\, t$ below by $\frac{\epsilon}{L}(e^{Lt}-1)$.) With that caveat, $$\begin{align*} \tilde \phi_i (t) - \phi_i (t) & \leq \int_0^t\left|\tilde f_i(\phi(\tau)) - f_i(\phi(\tau)) \right| \, d\tau \\ & \leq \int_0^t\left\|\tilde f(\phi(\tau)) - f(\phi(\tau)) \right\| \, d\tau \\ & \leq \int_0^t \epsilon \, d\tau = \epsilon \, t \end{align*}$$ Symmetric reasoning applies to $\phi_i(t) - \tilde \phi_i(t)$, and so we conclude: $$\left| \tilde \phi_i(t) - \phi_i(t) \right| \leq \epsilon \, t$$

Taking the norm of this worst-case componentwise difference, we obtain: $$\left\| \tilde \phi(t) - \phi(t) \right\| \leq \sqrt{N}\, \epsilon \, t$$ where $N$ is the dimensionality of the space, i.e. $\phi(t) \in \mathbb R^N$.


To Summarise:

Suppose we start off with upper bounds on $\|\phi\|$ and $\|\tilde \phi\|$ over the whole interval $[0,t]$:

$$||\phi(\tau)|| \leq r \quad \forall \tau \in [0,t]$$ $$||\tilde \phi(\tau)|| \leq \tilde r \quad \forall \tau \in [0,t]$$

We can then bound the difference between the approximate and exact vector fields ($\tilde f$ and $f$) on the larger of these two balls:

$$||f(x) - \tilde f(x)|| \leq \epsilon \quad \forall x \in \bar{B}_{\max\{\tilde r,r\}}(0)$$

Using this bound $\epsilon$, we can put an upper bound on the error of the approximate solution $\tilde \phi(t)$ as follows:

$$\left|\left| \tilde \phi(t) - \phi(t) \right| \right| \leq \sqrt{N} \epsilon \,t$$
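As a numerical sanity check (my own sketch, not part of the argument above), the following integrates the pendulum $\ddot\theta = -k\sin\theta$ and its linearization $\ddot\theta = -k\theta$ side by side with a hand-rolled RK4 step and compares the observed error against the $\epsilon\,t$ scale. The values of $k$, the amplitude, and the step size are arbitrary illustration choices:

```python
import math

def rk4_step(f, y, h):
    """One classical Runge-Kutta step for the autonomous system y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

k = 1.0
pendulum = lambda y: [y[1], -k * math.sin(y[0])]  # exact field f
linear   = lambda y: [y[1], -k * y[0]]            # approximation f~

theta0, h, steps = 0.1, 0.001, 1000               # integrate up to t = 1
y, yt = [theta0, 0.0], [theta0, 0.0]
max_err = 0.0
for _ in range(steps):
    y  = rk4_step(pendulum, y, h)
    yt = rk4_step(linear, yt, h)
    max_err = max(max_err, abs(y[0] - yt[0]))

# eps = sup |sin(theta) - theta| over |theta| <= theta0 (about theta0**3 / 6),
# so at t = 1 the epsilon * t scale is simply eps
eps = theta0 - math.sin(theta0)
print(max_err, eps)
```

At this small amplitude and short time the observed error stays within the $\epsilon\,t$ scale; over longer times or larger amplitudes the error can grow faster than linearly in $t$.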

Second answer:

Considering your direct question

I think it is clear that the bound you ask for exists, but it will not help you understand the differences between the two systems or find their similarities.

Consider your $\sin \theta$ example for negative $k$. The linear approximation is not chosen because $|\theta_t - \tilde\theta_t|$ will be small, but in order to derive a certain property of the complicated case from the simple one.

An alternative

It makes more sense to consider stationary properties and ask how well they can be approximated, for example the period $T$ of the oscillation. Its calculation can be generalized and the effect of different $f$ studied:

(I will use some concepts from physics, like energy conservation. This is no limitation, as they have mathematical counterparts.)

Assume we have some kind of potential $V:\mathbb R\to \mathbb R$. Then a particle in this potential moves according to

$$\ddot x=-V'(x).\tag{1}$$

Assume $V(0)=0$ is a local minimum of $V$. If the particle starts at $x_0=0$ with initial kinetic energy $V_0>0$ smaller than the value of $V$ at the next stationary point, i.e. it is caught in the potential well, then the time $T$ it takes to come to a stop can be calculated in the following way:

$$ T=\int_0^T dt =\int_0^{x(T)}\frac{dx}{\dot x(x)} = \int_0^{x(T)}\frac{dx}{\sqrt{2(V_0-V(x))}} = \int_0^{V_0} \frac{du}{\sqrt{2(V_0-u)}\,V'(V^{-1}(u))}=\int_0^1 \frac{du}{\sqrt{2(1-u)}\,g(u)}, \text{ where } g(u)=\frac{V'(V^{-1}(u V_0))}{\sqrt{V_0}}$$

The third equality used energy conservation ($ \frac 1 2 \dot x^2=V_0-V(x)$, which is a mathematical consequence of (1)) and all other steps were integration by substitution. $V^{-1}$ denotes the inverse of $V$ on $[0,x(T)]$, where it is monotonic by our assumption.
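The energy-conservation identity used in the third equality can be checked numerically. A minimal sketch (my addition, with an arbitrary quartic well $V(x)=x^4/4$ as the example potential) integrates $\ddot x = -V'(x)$ with the velocity-Verlet scheme and monitors how far $\frac12\dot x^2 + V(x)$ drifts from $V_0$:

```python
import math

V  = lambda x: x ** 4 / 4.0   # example potential (arbitrary choice)
dV = lambda x: x ** 3         # V'(x)

V0, h = 0.5, 1e-4             # initial kinetic energy and step size
x, v = 0.0, math.sqrt(2.0 * V0)
drift = 0.0
a = -dV(x)
for _ in range(100_000):      # integrate up to t = 10
    # velocity-Verlet: symplectic, so the energy error stays bounded
    x += v * h + 0.5 * a * h * h
    a_new = -dV(x)
    v += 0.5 * (a + a_new) * h
    a = a_new
    drift = max(drift, abs(0.5 * v * v + V(x) - V0))
print(drift)
```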

As you can see, $V$ and $V_0$ enter only through the function $g$, which can tell you a lot about the periodic behavior of two different systems. Let's compare it for your two cases:

Example: Harmonic Oscillator

$V(x)= \frac k{2} x^2$, which yields $\ddot x = -k x$ and

$$ g(u)=\sqrt{2ku}$$

As expected, $g$ and the time $T$ do not depend on $V_0$ (harmonic oscillations have periods independent of the amplitude), and a full period is given by $4T$, which evaluates to $\frac{2\pi}{\sqrt{k}}$.
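This value can be confirmed numerically. A minimal sketch (my addition) evaluates the quarter-period integral $T=\int_0^1 \frac{du}{\sqrt{2(1-u)}\,g(u)}$ with a midpoint rule, which conveniently never evaluates the integrable singularities at $u=0$ and $u=1$:

```python
import math

k = 1.0
g = lambda u: math.sqrt(2.0 * k * u)   # g(u) for the harmonic potential

# quarter period T = integral_0^1 du / (sqrt(2(1-u)) g(u)); the midpoint
# rule samples cell centers, so the singular endpoints are never hit
n = 100_000
h = 1.0 / n
T = sum(h / (math.sqrt(2.0 * (1.0 - u)) * g(u))
        for u in (h * (i + 0.5) for i in range(n)))

period = 4 * T
print(period, 2.0 * math.pi / math.sqrt(k))   # should agree closely
```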

Example: Mathematical Pendulum

$V(x)=k(1-\cos(x))$, yielding $\ddot x = -k\sin(x)$ and using $\sin(x)=\sqrt{1-\cos^2(x)}$ (valid on $[0,\pi]$):

$$g(u) = \sqrt{k(2 u-su^2)}, \text{ where } s=1-\cos(x(T))$$

This immediately tells you that for an ideal pendulum, higher amplitudes $x(T)$ lead to longer periods: a larger amplitude means a larger $s$, hence a smaller $g(u)$ and a larger integrand.

It also tells you that in the limit of very small amplitudes ($s\to 0$), your $g(u)$ and $T$ converge monotonically to the harmonic case.
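Both claims (longer periods at larger amplitude, convergence to the harmonic case as the amplitude shrinks) can be checked by evaluating the quarter-period integral for a few amplitudes. In this sketch (my addition) the amplitude values are arbitrary illustration choices and $k=1$:

```python
import math

def quarter_period(g, n=100_000):
    """T = integral_0^1 du / (sqrt(2(1-u)) g(u)) via the midpoint rule,
    which never touches the integrable endpoint singularities."""
    h = 1.0 / n
    return sum(h / (math.sqrt(2.0 * (1.0 - u)) * g(u))
               for u in (h * (i + 0.5) for i in range(n)))

k = 1.0
T_harm = quarter_period(lambda u: math.sqrt(2.0 * k * u))

ratios = []
for amp in (0.1, 0.5, 1.0):                 # amplitude x(T) in radians
    s = 1.0 - math.cos(amp)
    g = lambda u, s=s: math.sqrt(k * (2.0 * u - s * u * u))
    ratios.append(quarter_period(g) / T_harm)
print(ratios)   # ratios grow with the amplitude
```

The ratio to the harmonic period is essentially 1 at amplitude $0.1$ and grows with the amplitude, matching both observations above.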