Convergence of the Picard iteration with an incorrect initial guess


Consider the system of differential equations $\boldsymbol{y'} = f(t,\boldsymbol{y}(t))$ subject to the initial condition $\boldsymbol{y}(t_0) = \boldsymbol{y_0}$. The Picard–Lindelöf theorem guarantees the existence and uniqueness of a solution of this system under the assumption that $f \colon \mathbf{R}^{n+1} \to \mathbf{R}^n$ is uniformly Lipschitz in $\boldsymbol{y}$, i.e., $||f(t,\boldsymbol{y}) - f(t,\boldsymbol{z})|| \leqslant K||\boldsymbol{y}-\boldsymbol{z}||$ for some $K \geqslant 0.$ The proof involves analyzing the convergence properties of a sequence $(\phi_j)_{j \in \mathbf{N}}$ of approximate solutions, called the sequence of successive approximations. This sequence is obtained by translating the system of DEs into a system of integral equations. Integrating both sides of $\boldsymbol{y'} = f(t,\boldsymbol{y}(t))$, we get

$$\boldsymbol{y}(t) = \boldsymbol{y_0} + \int_{t_0}^t f(s,\boldsymbol{y}(s)) \text{d} s.$$

$\boldsymbol{\phi}(t)$ is the (unique) solution of the system of differential equations if and only if it solves the above integral equation. Let $\boldsymbol{\phi_0}(t) = \boldsymbol{y_0}.$ The sequence of successive approximations

$$\boldsymbol{\phi}_j(t) = \boldsymbol{y_0} + \int_{t_0}^t f(s,\boldsymbol{\phi}_{j-1}(s)) \text{d} s $$

can be shown to converge uniformly to $\boldsymbol{\phi}(t)$ under the assumption $f$ is Lipschitz.

For example, in the case of the (scalar-valued) system $y' = y$ subject to $y(0) = 1$, a straightforward computation shows

\begin{align*} \phi_0(t) &= 1 \\ \phi_1(t) &= 1 + \int_0^t \text{d} s = 1 + t \\ \phi_2(t) &= 1 + \int_0^t (1+s) \text{d} s = 1 + t + \frac{t^2}{2!} \\ \vdots \\ \phi_N(t) &= 1 + \int_0^t \bigg( 1+s+\frac{s^2}{2!}+\cdots+\frac{s^{N-1}}{(N-1)!} \bigg) \text{d} s = 1 + t + \frac{t^2}{2!} + \cdots + \frac{t^N}{N!} \longrightarrow e^t, \\[-6pt] \end{align*}

as $N \to \infty$.
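The computation above is easy to reproduce numerically. Here is a minimal sketch (the function and parameter names are my own, chosen for illustration) that runs the iteration with a cumulative trapezoid rule in place of the exact integral:

```python
import math

# Numerical sketch of the Picard iteration for y' = y, y(0) = 1 on [0, T],
# with the integral replaced by a cumulative trapezoid rule.
def picard_iterates(phi0, T=1.0, m=2000, n_iter=20):
    h = T / m
    ts = [k * h for k in range(m + 1)]
    phi = [phi0(t) for t in ts]                  # phi_0 sampled on the grid
    for _ in range(n_iter):
        # phi_j(t) = 1 + \int_0^t phi_{j-1}(s) ds
        new, acc = [1.0], 0.0
        for k in range(1, m + 1):
            acc += 0.5 * h * (phi[k - 1] + phi[k])
            new.append(1.0 + acc)
        phi = new
    return ts, phi

ts, phi = picard_iterates(lambda t: 1.0)
err = max(abs(p - math.exp(t)) for t, p in zip(ts, phi))
print(err)  # small: limited by the grid resolution, not by the iteration
```

Replacing `lambda t: 1.0` with a bounded "wrong" guess such as `math.cos` yields the same numerical limit $e^t$.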

In an exercise on a recent homework, I analyzed the convergence of the sequence of successive approximations with the initial guess $\phi_0(t) = \cos t$. One can show (through a series of serpentine calculations) that $\phi_j \to e^t$ in this case.

That the iteration with an incorrect guess converges (to the correct solution of the system!) surprised me and other students in the class. Since working on this problem, I have begun to wonder: if $\phi_0$ is an incorrect initial guess, when does the sequence of successive approximations converge to the true solution? What assumptions on $f$ and $\phi_0$ are needed for this to be the case in general? In the case of the $y' = y$ system with the given IC:

  • If $\phi_0(t) = 0$ almost everywhere (in the sense of the Lebesgue measure), $\phi_j(t) \to e^t.$ To see this, note that the method of successive approximations "self-corrects" in the first iteration:

\begin{align*} \phi_1(t) &= 1 + \int_0^t \phi_0(s) \text{d} s = 1 \\[3pt] \end{align*}

  • If $\phi_0 = a_0 + a_1 t + \cdots + a_n t^n$ is a polynomial, the sequence of successive approximations converges to $e^t.$ This follows from a straightforward calculation:

\begin{align*} \phi_1(t) &= 1 + \int_0^t \sum_{j=0}^n a_j s^j \text{d} s = 1 + \sum_{j=0}^n a_j \frac{t^{j+1}}{(j+1)} \\ \phi_2(t) &= 1 + \int_0^t \bigg(1 + \sum_{j=0}^n a_j \frac{s^{j+1}}{(j+1)} \bigg) \text{d} s = 1 + t + \sum_{j=0}^n a_j \frac{t^{j+2}}{(j+2)(j+1)} \\ \vdots \\ \phi_N(t) &= 1 + t + \frac{t^2}{2!} + \cdots + \frac{t^{N-1}}{(N-1)!} + \sum_{j=0}^n a_j \frac{t^{j+N}}{(j+N)(j+N-1)\cdots(j+2)(j+1)}, \\[-12pt] \end{align*}

For a fixed $t$, the denominator in the rightmost term dominates in the limit as $N \to \infty$: each iteration contributes only one extra factor of $t$ to the numerator, while ever-larger factors are multiplied into the denominator. Thus,

$$\lim_{N \to \infty} \phi_N(t) = e^t + \sum_{j=0}^n a_j \underbrace{\lim_{N \to \infty} \frac{t^{j+N}}{(j+N)(j+N-1)\cdots(j+2)(j+1)}}_{\text{$=0$}} = e^t.$$
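This polynomial computation can be checked exactly in a few lines, representing $\phi_j$ by its coefficient list and integrating term by term (a sketch with illustrative helper names; exact rational arithmetic avoids rounding):

```python
from fractions import Fraction
import math

# Exact check of the polynomial case: represent phi_j by its coefficient
# list [a_0, a_1, ...] and apply one Picard step for y' = y, y(0) = 1:
# phi -> 1 + \int_0^t phi(s) ds, i.e. integrate term by term and set the
# constant term to 1.
def picard_step(poly):
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(poly)]

def evaluate(poly, t):
    return sum(float(c) * t**k for k, c in enumerate(poly))

phi = [Fraction(3), Fraction(-5), Fraction(2)]   # wrong guess 3 - 5t + 2t^2
for _ in range(30):
    phi = picard_step(phi)

# The low-degree coefficients are exactly those of e^t; the "wrong" part
# has been pushed into tiny high-degree tail terms.
print(phi[5] == Fraction(1, 120))                # True
print(abs(evaluate(phi, 1.0) - math.e) < 1e-9)   # True
```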

I would bet that a sufficient condition for convergence is that $\phi_0$ is real analytic on the domain of integration, but I am having some trouble justifying an interchange of limits in the proof.

If $\phi_0 = \sum_{n=1}^\infty a_n t^n$, we have

\begin{align*} \phi_1(t) &= 1 + \int_0^t \sum_{n=1}^\infty a_n s^n \text{d} s \\ &= 1 + \sum_{n=1}^\infty a_n \int_0^t s^n \text{d} s \\ &= 1 + \sum_{n=1}^\infty a_n \frac{t^{n+1}}{n+1} \\ \phi_2(t) &= 1 + \int_0^t \bigg( 1 + \sum_{n=1}^\infty a_n \frac{s^{n+1}}{n+1} \bigg) \text{d} s \\ &= 1 + t + \sum_{n=1}^\infty a_n \frac{t^{n+2}}{(n+1)(n+2)} \\ &\vdots \\ \phi_{N}(t) &= 1 + t + \frac{t^2}{2!} + \cdots + \frac{t^{N-1}}{(N-1)!} + \sum_{n=1}^\infty a_{n} p_{N,n}(t) \\[-12pt] \end{align*}

where $p_{N,n}(t) = t^{n+N}/[(n+N)(n+N-1)\cdots(n+1)]$ is the degree $n+N$ polynomial obtained in the $N$th iteration. (Note: the uniform convergence of Taylor series is used to interchange integration and summation at every iteration.) As $N \to \infty$, larger and larger factors are multiplied into the denominator, while only a factor of $t$ is added to the numerator. If we are allowed to interchange the limit as $N \to \infty$ and the summation, we should get $\lim_{N \to \infty} \phi_N(t) = e^t$.

(Upon re-writing the summation over $n$ as an integral with respect to the counting measure $\mu$, one can use the Monotone Convergence Theorem to interchange limiting operations. If

$$ a_{n} p_{N,n}(t) = a_n \frac{n! t^{n+N}}{(n+N)!} \downarrow 0$$

as $N \to \infty$, the MCT implies

$$\lim_N \sum_{n=1}^\infty a_{n} p_{N,n}(t) = \lim_N \int a_{n} p_{N,n} \text{d} \mu = \int \lim_N a_{n} p_{N,n} \text{d} \mu = 0.$$

I think the convergence is monotone, but do not know how to show this for $t > 1$.)
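For what it's worth, one observation (not from the calculation above) sidesteps both the interchange and the monotonicity question with a crude uniform bound: since $(n+1)(n+2)\cdots(n+N) \geqslant N!$,

$$\bigg| \sum_{n=1}^\infty a_n p_{N,n}(t) \bigg| \leqslant \sum_{n=1}^\infty |a_n| \, |t|^n \, \frac{|t|^N}{N!} = \frac{|t|^N}{N!} \sum_{n=1}^\infty |a_n| \, |t|^n \longrightarrow 0$$

as $N \to \infty$, for every $t$ inside the radius of convergence of $\sum a_n t^n$ (where $\sum |a_n| |t|^n < \infty$).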

I am interested in (1) verifying if the above limits can be interchanged and (2) if we can use, e.g., the density of polynomials in the space of continuous functions (with the $\sup$ norm) to prove $\phi_j \to e^t$ for all $\phi_0$ which are sufficiently continuous (or integrable). If not, are there necessary and/or sufficient conditions for the Picard iteration to converge with the wrong initial guess?


Answer:

What is happening in your particular situation with $y'=y$ is simple. If $\phi_0$ is bounded by some $M$ then $\left | \phi_n(t)-\sum_{k=0}^{n-1} \frac{t^k}{k!} \right | \leq M \frac{|t|^n}{n!}$ which goes to zero as $n \to \infty$ for any fixed $t$. The proof of this is based on the observation that

$$\phi_n(t)=\sum_{k=0}^{n-1} \frac{t^k}{k!} + \int_0^t \int_0^{t_1} \dots \int_0^{t_{n-1}} \phi_0(t_n) dt_n dt_{n-1} \dots dt_1$$

which you can easily prove by induction. Then just apply the triangle inequality after subtracting off the truncated exponential to get the desired result.
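As a sanity check, the bound can be tested numerically; here is a sketch (the grid, the guess $\phi_0(t) = \sin 3t$ with $M = 1$, and all names are my own):

```python
import math

# Check |phi_n(t) - sum_{k=0}^{n-1} t^k/k!| <= M |t|^n / n! for y' = y,
# using the bounded wrong guess phi_0(t) = sin(3t), so M = 1.
T, m, n_iter = 1.0, 4000, 8
h = T / m
ts = [k * h for k in range(m + 1)]
phi = [math.sin(3 * t) for t in ts]
for _ in range(n_iter):
    new, acc = [1.0], 0.0
    for k in range(1, m + 1):
        acc += 0.5 * h * (phi[k - 1] + phi[k])   # cumulative trapezoid rule
        new.append(1.0 + acc)
    phi = new

def trunc_exp(t):  # sum_{k=0}^{n_iter - 1} t^k / k!
    return sum(t**k / math.factorial(k) for k in range(n_iter))

# worst <= 0 means the bound holds on the grid (up to discretization error)
worst = max(abs(p - trunc_exp(t)) - t**n_iter / math.factorial(n_iter)
            for t, p in zip(ts, phi))
print(worst < 1e-6)
```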

If $\phi_0$ is not bounded (but is defined and continuous on the entire line) then you can repeat this argument on any finite subinterval instead.

This is very particular to this case, though. I wouldn't expect this type of estimation to work out in general.

The way the fixed point theorem is actually applied in this situation, you argue that since $f$ is Lipschitz in $y$ with constant $L$, the iteration operator itself is Lipschitz with constant $LT$ where $T$ is the length of the time interval under consideration. Then the fixed point theorem applies as long as $T<L^{-1}$, so you can run the iteration up to there. Then mathematically speaking you can just "restart" the process at a new $t_0$ and propagate from there.
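To make that estimate explicit: writing $\Phi$ for the iteration operator $(\Phi \boldsymbol{u})(t) = \boldsymbol{y_0} + \int_{t_0}^t f(s,\boldsymbol{u}(s)) \,\text{d} s$ on $C([t_0, t_0+T])$ with the sup norm, the Lipschitz assumption gives, for $t_0 \leqslant t \leqslant t_0 + T$,

$$\|(\Phi \boldsymbol{u})(t) - (\Phi \boldsymbol{v})(t)\| \leqslant \int_{t_0}^{t} \|f(s,\boldsymbol{u}(s)) - f(s,\boldsymbol{v}(s))\| \,\text{d} s \leqslant L \int_{t_0}^{t} \|\boldsymbol{u}(s) - \boldsymbol{v}(s)\| \,\text{d} s \leqslant LT \, \|\boldsymbol{u} - \boldsymbol{v}\|_\infty,$$

so $\Phi$ is a contraction whenever $LT < 1$, and the Banach fixed point theorem then yields convergence of the iterates to the unique solution starting from any continuous guess $\phi_0$ on that interval.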