Error bounds for non-autonomous systems with respect to input


The error bound for an ordinary differential equation \begin{equation} \dot{x}(t) = f(x(t)) \end{equation} with respect to the initial conditions $x(t_0) = x_0$ and $\hat{x}(t_0)=\hat{x}_0$ is \begin{equation} \left|\left|\hat{x}(t)-x(t)\right|\right| \leqslant e^{L\left|t-t_0\right|}\left|\left|\hat{x}(t_0)-x(t_0)\right|\right|, \end{equation} where $L$ is a Lipschitz constant of $f$ with respect to $x$. For a proof, see e.g. Stoer, Bulirsch.
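This classical bound can be illustrated numerically. The sketch below is a toy example of my own (the system $f(x) = -x$, the constants, and the forward Euler integrator are all illustrative assumptions, not taken from the question): it integrates two trajectories from nearby initial conditions and checks the inequality with $L = 1$.

```python
import math

# Toy autonomous system x' = f(x) with f(x) = -x, which is Lipschitz
# with constant L = 1.  (Illustrative choice, not from the question.)
def f(x):
    return -x

def euler(x0, t0, t1, n=100000):
    """Forward-Euler approximation of the solution value x(t1)."""
    dt = (t1 - t0) / n
    x = x0
    for _ in range(n):
        x += dt * f(x)
    return x

t0, t1 = 0.0, 2.0
x0, x_hat0 = 1.0, 1.001    # two nearby initial conditions
L = 1.0

err = abs(euler(x_hat0, t0, t1) - euler(x0, t0, t1))
bound = math.exp(L * (t1 - t0)) * abs(x_hat0 - x0)
print(err, bound)
```

For this contracting system the actual error even shrinks, while the bound grows like $e^{L(t-t_0)}$, so the inequality holds with a large margin.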

I have read in an article [Arnold] that for a system \begin{equation} \dot{x}(t) = f(x(t), u(t)) \end{equation} a similar bound holds with respect to two different input signals $u(t)$ and $\tilde{u}(t)$: \begin{equation} \left|\left|\tilde{x}(t)-x(t)\right|\right| \leqslant C\left(e^{L\left|t-t_0\right|}-1\right)\max_{s\in[t_0,t]}\left|\left|\tilde{u}(s)-u(s)\right|\right|, \end{equation} where $L$ is a Lipschitz constant of $f$ with respect to $x$. The initial conditions are assumed to be equal, $\tilde{x}(t_0) = x(t_0) = x_0$, and the input signal $\tilde{u}(t)$ is assumed to be a polynomial approximation of $u(t)$.

I would like to ask for help with proving this statement. I went through the following steps: \begin{align} &x(t) = x_0 + \int^t_{t_0}f(x(s), u(s)) \,\mathrm{d}s\\ &\tilde{x}(t) = x_0 + \int^t_{t_0}f(\tilde{x}(s), \tilde{u}(s)) \,\mathrm{d}s\\ &\tilde{x}(t) - x(t) = \int^t_{t_0} \left[f(\tilde{x}(s), \tilde{u}(s)) - f(x(s), u(s)) \right] \mathrm{d}s\\ &\left|\left|\tilde{x}(t) - x(t)\right|\right| \leqslant \int^t_{t_0} \left|\left|f(\tilde{x}(s), \tilde{u}(s)) - f(x(s), u(s)) \right|\right| \mathrm{d}s \end{align} I get stuck at this step. I am not sure whether I am supposed to use the Lipschitz condition \begin{equation} \left|\left|f(\tilde{x}(t), U(t)) - f(x(t), U(t)) \right|\right| \leqslant L\left|\left|\tilde{x}(t) - x(t)\right|\right|, \end{equation} whether this is the correct formulation of the Lipschitz condition with respect to $x$, and whether I need to somehow use the fact about the polynomial approximation. Any assistance is appreciated.

Edit: An additional uniform Lipschitz condition of $f$ with respect to both arguments can be assumed.
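The claimed input-perturbation bound can also be checked numerically on a toy system. In the sketch below (my own illustrative setup: $f(x,u) = -x + u$, so $L = M = 1$, with $\tilde{u}$ the cubic Taylor polynomial of $u(t) = \sin t$), the maximal state deviation stays below $\frac{M}{L}\left(e^{L(t_1-t_0)}-1\right)\max\|\tilde{u}-u\|$:

```python
import math

# Toy non-autonomous system x' = f(x, u(t)) with f(x, u) = -x + u,
# Lipschitz in x with L = 1 and in u with M = 1.  (Illustrative choice.)
def f(x, u):
    return -x + u

def simulate(u_func, x0, t0, t1, n=100000):
    """Forward-Euler trajectory of x' = f(x, u(t)) on [t0, t1]."""
    dt = (t1 - t0) / n
    x, t = x0, t0
    xs = [x]
    for _ in range(n):
        x += dt * f(x, u_func(t))
        t += dt
        xs.append(x)
    return xs

t0, t1, x0 = 0.0, 1.0, 0.5
u = lambda t: math.sin(t)
u_tilde = lambda t: t - t**3 / 6      # cubic Taylor polynomial of sin(t)

xs = simulate(u, x0, t0, t1)
xs_tilde = simulate(u_tilde, x0, t0, t1)

# Maximal state deviation along the trajectory
err = max(abs(a - b) for a, b in zip(xs, xs_tilde))

# Maximal input mismatch, sampled on a grid over [t0, t1]
ts = [t0 + k * (t1 - t0) / 1000 for k in range(1001)]
du = max(abs(u(t) - u_tilde(t)) for t in ts)

L, M = 1.0, 1.0
bound = (M / L) * (math.exp(L * (t1 - t0)) - 1) * du
print(err, bound)
```

This matches the constant $C = M/L$ derived in the accepted answer below, applied to equal initial conditions.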

Best answer:

You can use Gronwall's inequality; we need only the following corollary.

Let $L$ and $M$ be positive real numbers. Suppose that $f: [t_0,t_1] \rightarrow \mathbb{R}$ is a continuous non-negative function satisfying $$f(t) \leq \int_{t_0}^{t} Lf(s)+M \,ds. $$ Then $$f(t) \leq \frac{M}{L} \left(\exp(L(t-t_0)) -1\right).$$

proof.

From Gronwall's inequality:

$$\begin{align} f(t) & \leq M(t-t_0) + \int_{t_0}^{t} ML(s-t_0)\exp\left(\int_{s}^{t}L\,dr\right) \, ds\\ & = M(t-t_0) + \int_{t_0}^{t} ML(s-t_0)\exp\left(L(t-s)\right) \, ds \\ & = M(t-t_0) + \left[-M(s-t_0)\exp(L(t-s))\right]_{s=t_0}^{s=t} + \int_{t_0}^{t} M\exp(L(t-s)) \,ds \\ & = M(t-t_0) - M(t-t_0) + \frac{M}{L} \left(\exp(L(t-t_0)) -1\right)\\ &= \frac{M}{L} \left(\exp(L(t-t_0)) -1\right), \end{align} $$ where the third line follows from integration by parts. Now we prove the main statement.

Let $f \in C(\mathbb{R}^n\times \mathbb{R}^m, \mathbb{R}^n)$ be Lipschitz in the first variable with constant $L$ and in the second variable with constant $M$. If $x_i:[t_0,t_1] \rightarrow \mathbb{R}^n$, $i = 1, 2$, satisfy $$\begin{cases} x_i'(t) = f(x_i(t),u_i(t))\\ x_i(t_0) = x_0,\\ \end{cases}$$ then there is a constant $C$ depending only on $L$ and $M$ such that $$\|x_1(T)-x_2(T)\| \leq C\left( e^{L(T-t_0)}-1 \right) \max_{[t_0,T]} \|u_2(t) - u_1(t) \| $$ for all $T \in [t_0,t_1]$.

proof.

Let $T \in [t_0,t_1]$. Note that for all $t \leq T$,

$$\|x_1(t) - x_2(t)\| \leq \int_{t_0}^{t} \|f(x_1(s),u_1(s)) - f(x_2(s),u_2(s))\| \,ds$$

By the triangle inequality and Lipschitz continuity:

$$\begin{align} \|x_1(t) - x_2(t) \| & \leq \int_{t_0}^{t} L\|x_1(s) - x_2(s)\| + \|f(x_2(s),u_1(s)) - f(x_2(s),u_2(s)) \| \,ds \\ & \leq \int_{t_0}^{t} L\|x_1(s) - x_2(s)\| + M\max_{[t_0,T]} \|u_2(t) - u_1(t) \| \,ds \\ \end{align}$$

Hence by Gronwall's inequality (take $f(t) = \|x_2(t) -x_1(t) \|$ and replace $M$ in the corollary above by $M\max_{[t_0,T]} \|u_2(t) - u_1(t) \|$):

$$ \|x_1(T)-x_2(T)\| \leq \frac{M}{L} \left( e^{L(T-t_0)}-1 \right) \max_{[t_0,T]} \|u_2(t) - u_1(t) \| $$
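As a numerical sanity check of the constant $M/L$ (an illustrative sketch of my own, not part of the proof): the extremal case of the corollary is $f'(t) = Lf(t) + M$ with $f(t_0) = 0$, whose exact solution is $\frac{M}{L}\left(e^{L(t-t_0)}-1\right)$. Integrating it with forward Euler should land just below that bound:

```python
import math

# Extremal ODE for the corollary: f'(t) = L f(t) + M, f(t0) = 0.
# Constants are arbitrary illustrative choices.
L, M, t0, t1 = 2.0, 3.0, 0.0, 1.0

n = 200000
dt = (t1 - t0) / n
f_val = 0.0
for _ in range(n):
    f_val += dt * (L * f_val + M)   # forward Euler step

bound = (M / L) * (math.exp(L * (t1 - t0)) - 1)
print(f_val, bound)
```

Forward Euler underestimates this exponentially growing solution, so the computed value approaches the bound from below as $n$ grows.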