$(\beta_n)_{n \geq 0}$ converges uniformly to the solution of $x' = F(t,x)$, variation of Picard iteration?


This is exercise 2.7. from Differential Equations: A Dynamical Systems Approach to Theory and Practice by Marcelo Viana and José Espinar.

Let $F \colon \mathcal{U} \to \mathbb{R}^n$ be continuous and locally Lipschitz in $x$ in an open subset $\mathcal{U}$ of $\mathbb{R}\times \mathbb{R}^n$. Let $K$ be a compact subset of $\mathcal{U}$ which is also convex in the second variable, and let $(t_0, x_0)$ be a point in $K$. Let $I \subseteq \mathbb{R}$ be any open interval containing $t_0$ and suppose that there exist $x_n \in \mathbb{R}^n, n \geq 0,$ converging to $x_0$ and curves $\beta_n \colon I \to \mathbb{R}^n, n \geq 0,$ such that $$ (t, \beta_n(t)) \in K \quad \text{and}\quad \beta_{n+1}(t) = x_n + \int_{t_0}^t F(s, \beta_n(s)) \,ds $$ for every $t \in I$ and all $n \geq 0$.

(1) Show that there exists $\varepsilon > 0$ such that, restricted to $(t_0 - \varepsilon, t_0 + \varepsilon)$, the sequence $(\beta_n)_n$ converges uniformly to the solution of $x' = F(t,x)$ with initial condition $x(t_0) = x_0.$

(2) Deduce that $(\beta_n)_n$ converges uniformly to the solution of $x' = F(t, x)$ with initial condition $x(t_0) = x_0$ in any compact subinterval.

I am already struggling with the first item. Note that this is homework, so I am not looking for a full solution (at this time), but only for hints.

Picard's Theorem, or the proof of it, should be useful for this. I tried to proceed as in the proof but there are several points where I got stuck. What I have:

If I define the space

$$ Y = \{\gamma\colon (t_0 - \varepsilon, t_0 + \varepsilon) \to K' \text{ continuous} \mid \sup_{t \in (t_0 - \varepsilon, t_0 + \varepsilon)} |\gamma(t) - x_0| \leq \alpha\}, $$ where $K'$ is the projection of $K$ to $\mathbb{R}^n$, I can show that the operator defined as $$ \mathcal{L}(\gamma)(t) = x_0 + \int_{t_0}^t F(s, \gamma(s))\, ds $$ is well defined and a contraction for suitably chosen $\varepsilon$ and $\alpha$. (This is basically just the proof of Picard's Theorem.) The unique fixed point of $\mathcal{L}$ will be the solution of $x' = F(t,x)$. I was also able to show that for $n$ large enough, $\beta_n \in Y$. This means in particular that taking such $\beta_n$ and applying the Picard operator, I get something that converges to the solution uniformly, i.e.,

$$ \mathcal{L}^k(\beta_n) \to \beta $$ as $k \to \infty$, where I denote by $\beta$ the unique solution of the ODE. However, this is not quite the convergence I want, since I would like to have $\beta_n \to \beta$. I don't see how to salvage this.

I also tried to define a slightly different operator, namely

$$ \mathcal{T}(\gamma)(t) = \gamma(t_0) + \int_{t_0}^t F(s, \gamma(s))\, ds, $$

which would seem to work nicer with the $\beta_n$ since

$$ \mathcal{T}(\beta_n)(t) = \beta_n(t_0) + \int_{t_0}^t F(s, \beta_n(s))\, ds = x_{n-1} + \int_{t_0}^t F(s, \beta_n(s))\, ds. $$ However, this last expression is not quite $\beta_{n+1}(t)$. Further, I can't show (and don't know whether it is true) that $\mathcal{T}$ is a contraction.

I am looking for either a hint how to continue with one of my approaches, or a hint where else to start. Thanks!


Best answer

I think the following works; however, I would be interested if anyone sees a simpler solution.

(1) Let $\mathcal{L}$ be the Picard operator, i.e., \begin{align*} \mathcal{L} \colon Y &\to Y, \\ \gamma &\mapsto x_0 + \int_{t_0}^t F(s, \gamma(s)) \, d s, \end{align*} where $Y = \{\gamma \colon (t_0 - \varepsilon, t_0 + \varepsilon) \to \overline{B}_\delta(x_0) \text{ continuous }\mid \gamma(t_0) = x_0\}$ for some $\varepsilon, \delta$ as defined in the proof of Picard's theorem, i.e., chosen such that $\mathcal{L}$ is a well-defined contraction on $Y$.

In particular, these constants are chosen such that $\mathcal{L}$ is a contraction with contraction rate $\lambda \in (0,1)$. Since $x_n \xrightarrow{n \to \infty} x_0$, for every $\eta > 0$ there exists $N_\eta \in \mathbb{N}$ such that $\|x_0 - x_n\| < \eta$ for all $n \geq N_\eta$. Notice that \begin{equation*} \mathcal{L} \beta_n (t) - \beta_{n+1}(t) = x_0 - x_n, \end{equation*} so in particular $\|\mathcal{L}\beta_n - \beta_{n+1}\| < \eta$ for $n \geq N_\eta$. Applying the operator $\mathcal{L}$ twice to $\beta_n$, we get \begin{equation*} \|\mathcal{L}^2\beta_n - \beta_{n+2}\| \leq \|\mathcal{L}^2\beta_{n} - \mathcal{L}\beta_{n+1}\| + \|\mathcal{L}\beta_{n+1} - \beta_{n+2}\| < \lambda \eta + \eta \end{equation*} (note that the Lipschitz estimate for $F$ gives $\|\mathcal{L}\gamma_1 - \mathcal{L}\gamma_2\| \leq \lambda\|\gamma_1 - \gamma_2\|$ for any continuous curves with values in $\overline{B}_\delta(x_0)$, not only for those in $Y$). Inductively, for any $k$ we obtain \begin{equation*} \|\mathcal{L}^k \beta_n - \beta_{n+k}\| < \eta + \lambda \eta + \lambda^2 \eta + \ldots + \lambda^{k-1}\eta < \eta \sum_{i = 0}^\infty \lambda^i = \underbrace{\frac{1}{1-\lambda}}_{=: C} \eta. \end{equation*}

Now, the curve $\beta_n$ is not necessarily in $Y$, but $\mathcal{L}\beta_n \in Y$. Hence applying the Picard operator to $\mathcal{L}\beta_n$ repeatedly yields a sequence converging to the unique fixed point $\tilde{\beta}$, i.e., to the unique solution of the differential equation. Formally, \begin{equation*} \mathcal{L}^k\beta_n \xrightarrow{k \to \infty} \tilde{\beta}. \end{equation*}

Let now $\eta' > 0$ be arbitrary and set $\eta = \frac{1}{2C}\eta'$. With $N_\eta$ as above, for all $n \geq N_\eta$ and all $k$ we have \begin{equation*} \|\mathcal{L}^k\beta_n - \beta_{n+k}\| < C\eta = \frac{\eta'}{2} \end{equation*} by the estimate above. Next, choose $k_0$ large enough that \begin{equation*} \|\mathcal{L}^k\beta_{N_\eta}- \tilde{\beta}\| < \frac{\eta'}{2} \end{equation*} for all $k \geq k_0$.
With that, for $n \geq N_\eta$ and $k + n - N_\eta \geq k_0$ we have \begin{align*} \|\beta_{n+k} - \tilde{\beta}\| &=\Vert \beta_{N_\eta+(k+n-N_\eta)} -\tilde{\beta}\Vert \\ &\leq \|\beta_{N_\eta+(k+n-N_\eta)} - \mathcal{L}^{k+n-N_\eta}\beta_{N_\eta}\| + \|\mathcal{L}^{k+n-N_\eta}\beta_{N_\eta}- \tilde{\beta}\| \\ &< \frac{\eta'}{2} + \frac{\eta'}{2} = \eta', \end{align*} which shows the claimed (uniform) convergence locally, i.e., on the interval $(t_0 - \varepsilon, t_0 + \varepsilon)$.
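As a sanity check (not part of the proof), the argument of part (1) can be illustrated numerically. The following sketch runs the iteration $\beta_{n+1}(t) = x_n + \int_{t_0}^t F(s, \beta_n(s))\,ds$ for the concrete scalar ODE $x' = -x$ with $x(t_0) = x_0 = 1$; the ODE, the grid, the trapezoidal quadrature, and the perturbed initial values $x_n = x_0 + \frac{1}{n+1} \to x_0$ are all illustrative choices, not data from the exercise.

```python
# Illustration of part (1): Picard-type iterates with perturbed initial
# values x_n -> x_0 still converge uniformly to the true solution.
# Example ODE (an assumption for illustration): x' = -x, x(0) = 1,
# whose solution is exp(-t). All quantities below are discretized.
import math

t0, x0 = 0.0, 1.0
eps = 0.5                      # half-width of the interval (t0-eps, t0+eps)
N = 400                        # grid points per half-interval
h = eps / N
ts = [t0 + k * h for k in range(-N, N + 1)]   # grid on [t0-eps, t0+eps]
i0 = N                         # index of t0 in the grid

def F(t, x):
    """Right-hand side of the example ODE."""
    return -x

def picard_step(beta, xn):
    """One iteration: t -> xn + integral_{t0}^t F(s, beta(s)) ds,
    computed with the trapezoidal rule on the grid."""
    f = [F(t, b) for t, b in zip(ts, beta)]
    out = [0.0] * len(ts)
    out[i0] = xn
    for i in range(i0 + 1, len(ts)):          # integrate forward in time
        out[i] = out[i - 1] + 0.5 * h * (f[i - 1] + f[i])
    for i in range(i0 - 1, -1, -1):           # integrate backward in time
        out[i] = out[i + 1] - 0.5 * h * (f[i] + f[i + 1])
    return out

beta = [x0] * len(ts)           # beta_0: the constant curve at x_0
exact = [math.exp(-(t - t0)) for t in ts]
for n in range(200):
    xn = x0 + 1.0 / (n + 1)     # perturbed initial values, x_n -> x_0
    beta = picard_step(beta, xn)

sup_err = max(abs(b - e) for b, e in zip(beta, exact))
print(sup_err)                  # sup-norm distance to the true solution
```

The printed sup-norm error is small and is driven by the size of the remaining perturbation $\|x_n - x_0\|$, in line with the bound $\|\mathcal{L}^k\beta_n - \beta_{n+k}\| < \frac{\eta}{1-\lambda}$ above.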

(2) Let $J \subseteq I$ be a compact subinterval containing $t_0$. The idea is to cover $J$ with carefully chosen open intervals. By Picard's theorem, for every $t \in J$ we can find $\varepsilon$ and $\delta$ as above such that the corresponding Picard operator is a contraction. We claim that we can take a uniform $\varepsilon$ that works for every $t \in J$. Indeed, there is a result (Theorem 2.12 in Viana/Espinar) that guarantees that this is possible. Cover the compact set $K$ with balls of radius $\delta$ around every point of $K$, where again $\delta$ is obtained as in the proof of Picard's theorem. By compactness, finitely many such balls suffice, so we take the minimum of the corresponding $\delta$'s and the maximum of the corresponding values $$M(\delta)= \sup\{\|F(s,y)\| : (s,y) \in \overline{B_\delta(t)}\times \overline{B_\delta(x)}\}$$ over the finitely many centers $(t,x)$ to see that, by the result mentioned (Theorem 2.12), \begin{equation*} \varepsilon = \min\left\{\delta, \frac{\delta}{M(\delta)}\right\} \end{equation*} is the uniform $\varepsilon$ we need.

Now let us consider not just any open cover of $J$, but a particular one. We cover $J$ with sets of the form $(t_0 + k\frac{\varepsilon}{2}, t_0 + (k+1) \frac{\varepsilon}{2})$, where $k$ ranges over a finite set of integers $\{-M, \ldots, N\}$. The cover is illustrated below.

(Figure: the abutting intervals $\big(t_0 + k\tfrac{\varepsilon}{2},\, t_0 + (k+1)\tfrac{\varepsilon}{2}\big)$ covering $J$, with their centers marked.)

Note that if we write $t_i$ for the center of the interval $(t_0 + i\frac{\varepsilon}{2}, t_0 + (i+1)\frac{\varepsilon}{2})$, then $t_{i + 1} \in (t_i, t_i + \varepsilon)$ and $t_{i - 1} \in (t_i - \varepsilon, t_i)$ by construction.

From the first part, we know that \begin{equation*} \beta_{n+1}(t) = x_n + \int_{t_0}^t F(s, \beta_n(s))\, d s \end{equation*} and $\left.\beta_n\right|_{(t_0 - \varepsilon, t_0 + \varepsilon)} \xrightarrow{n \to \infty} \tilde{\beta}$. So far we only know that this convergence happens locally; however, the limiting function $\tilde{\beta}$ is already defined on all of $J$ by the existence and uniqueness of maximal solutions. Since $t_1 \in (t_0 - \varepsilon, t_0 + \varepsilon)$, we have $\beta_n(t_1) \to \tilde{\beta}(t_1)$. Further, \begin{equation*} \beta_{n+1}(t) - \beta_{n+1}(t_1) = \int_{t_1}^t F(s, \beta_n(s))\, d s, \end{equation*} so that \begin{equation*} \beta_{n+1}(t) = \beta_{n+1}(t_1) + \int_{t_1}^t F(s, \beta_n(s)) \, d s. \end{equation*} This puts us exactly into the setting of the first part, with $t_1$ in place of $t_0$ and the converging initial values $\beta_{n+1}(t_1) \to \tilde{\beta}(t_1)$ in place of $x_n \to x_0$; hence $\beta_{n+1} \xrightarrow{n \to \infty} \tilde{\beta}$ on $(t_1 - \varepsilon, t_1 + \varepsilon)$ as well. Repeating this argument $M + N$ times shows that we indeed have (uniform) convergence on all of $J$.
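The conclusion of part (2) can also be checked numerically: on a compact interval much longer than a single contraction window, the same perturbed iteration still converges uniformly. The sketch below (illustrative only; the ODE $x' = -x$, the interval $[-2, 2]$, the grid, and the perturbations $x_n = x_0 + \frac{1}{n+1}$ are all assumptions) tracks the sup-norm error over the whole interval as $n$ grows.

```python
# Illustration of part (2): the iteration beta_{n+1}(t) = x_n +
# integral_{t0}^t F(s, beta_n(s)) ds on the larger compact interval
# [-2, 2], where one Picard window no longer suffices. Example ODE
# (an assumption): x' = -x, x(0) = 1, with solution exp(-t).
import math

t0, x0, T = 0.0, 1.0, 2.0
N = 800
h = T / N
ts = [t0 + k * h for k in range(-N, N + 1)]   # grid on [t0-T, t0+T]
i0 = N                                        # index of t0

def F(t, x):
    return -x

def picard_step(beta, xn):
    """One iteration of t -> xn + integral_{t0}^t F(s, beta(s)) ds
    via the trapezoidal rule (forward and backward from t0)."""
    f = [F(t, b) for t, b in zip(ts, beta)]
    out = [0.0] * len(ts)
    out[i0] = xn
    for i in range(i0 + 1, len(ts)):
        out[i] = out[i - 1] + 0.5 * h * (f[i - 1] + f[i])
    for i in range(i0 - 1, -1, -1):
        out[i] = out[i + 1] - 0.5 * h * (f[i] + f[i + 1])
    return out

exact = [math.exp(-(t - t0)) for t in ts]
beta = [x0] * len(ts)
errs = []                                     # sup-norm errors per iterate
for n in range(300):
    beta = picard_step(beta, x0 + 1.0 / (n + 1))   # x_n -> x_0
    errs.append(max(abs(b - e) for b, e in zip(beta, exact)))

print(errs[9], errs[-1])        # error after 10 vs. 300 iterations
```

The error over the full interval keeps shrinking as $n$ grows, consistent with the chaining argument: local uniform convergence on each window $(t_i - \varepsilon, t_i + \varepsilon)$ propagates across the finitely many windows covering $J$.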