An error bound for Euler's method with $Y'(x) = f(x,Y(x))$ when $\frac{\partial f}{\partial y} \leq 0$


My question comes from section 6.2 of An Introduction to Numerical Analysis (2nd ed) by Kendall Atkinson. Specifically, there is a detail in the proof of the corollary to Theorem 6.3 that I do not understand. Here is the relevant corollary (paraphrased):

Corollary. Let $f:[x_0,b] \times \mathbb{R} \to \mathbb{R}$ ($-\infty < x_0 < b < \infty$) be a function satisfying the following conditions:

  1. $f$ is continuously differentiable,
  2. there exists $K > 0$ such that $|f(x,y_1) - f(x,y_2)| \leq K|y_1-y_2|$ for all $x \in [x_0,b]$ and all $y_1,y_2 \in \mathbb{R}$,
  3. $\partial f/\partial y \leq 0$.

Let $Y:[x_0,b] \to \mathbb{R}$ be the (unique) solution of the initial value problem \begin{cases} Y'(x) = f(x,Y(x)), & x \in [x_0,b] \\[1pt] Y(x_0) = Y_0. \end{cases}

Let $N(h) := \lfloor \frac{b-x_0}{h} \rfloor$ be the number of iterates to be computed, and let $y_0$ be a chosen initial iterate. For $n = 0,1,\ldots,N(h)$, let $x_n := x_0 + nh$ and let $y_{n+1}:= y_n + hf(x_n,y_n)$. Then for all sufficiently small $h$ we have \begin{align*} |Y(x_n) - y_n| \leq |e_0| + \frac{h}{2}(x_n - x_0) \cdot \max_{x \in [x_0,b]} |Y''(x)| \end{align*} for $0 \leq n \leq N(h)$, where $e_0 := Y(x_0) - y_0$ is the error in the initial iterate.
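As a quick numerical sanity check (my own, not from the book), here is a minimal Euler sketch on the test problem $Y' = -Y$, $Y(0) = 1$ on $[0,2]$, which satisfies all three conditions with $K = 1$ and $\partial f/\partial y = -1 \leq 0$; the exact solution is $Y(x) = e^{-x}$, so $\max_{[0,2]} |Y''| = 1$, and taking $y_0 = Y_0$ gives $e_0 = 0$:

```python
import math

def euler(f, x0, y0, b, h):
    """Euler's method: y_{n+1} = y_n + h*f(x_n, y_n)."""
    N = round((b - x0) / h)  # number of steps (round avoids float truncation)
    xs, ys = [x0], [y0]
    for n in range(N):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(x0 + (n + 1) * h)
    return xs, ys

# Assumed test problem: Y' = -Y, Y(0) = 1, exact solution Y(x) = exp(-x),
# hence Y''(x) = exp(-x) and max|Y''| = 1 on [0, 2].
x0, b, h = 0.0, 2.0, 0.01
xs, ys = euler(lambda x, y: -y, x0, 1.0, b, h)

max_Y2 = 1.0  # max of |Y''| over [x0, b]
for xn, yn in zip(xs, ys):
    err = abs(math.exp(-xn) - yn)            # |e_n|, with e_0 = 0 here
    bound = (h / 2) * (xn - x0) * max_Y2     # corollary's bound
    assert err <= bound + 1e-12
```

The assertion passing at every node is consistent with the corollary: with $e_0 = 0$ the error is bounded by $\frac{h}{2}(x_n - x_0)$, linear in the interval length rather than exponential.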

Here's my paraphrasing of the proof.

Proof. For each $n \geq 0$ let $e_n := Y(x_n) - y_n$ be the error at the $n$-th iterate. Since $f \in C^1([x_0,b] \times \mathbb{R})$, the map $x \mapsto Y'(x) = f(x,Y(x))$ is continuously differentiable, so $Y \in C^2([x_0,b])$. Then by Taylor's Theorem, \begin{align} Y(x_{n+1}) &= Y(x_n) + hY'(x_n) + \frac{h^2}{2}Y''(\xi_n) \\[3pt] &= Y(x_n) + hf(x_n,Y(x_n)) + \frac{h^2}{2}Y''(\xi_n) \end{align} for some $\xi_n \in (x_n,x_{n+1})$. Therefore, \begin{align*} e_{n+1} &= Y(x_{n+1}) - y_{n+1} \\[3pt] &= \left[Y(x_n) + hf(x_n,Y(x_n)) \right] - \left[y_n + hf(x_n,y_n) \right] + \frac{h^2}{2} Y''(\xi_n) \\[3pt] &= e_n + h \left[f(x_n,Y(x_n)) - f(x_n,y_n) \right] + \frac{h^2}{2} Y''(\xi_n). \end{align*}

It follows by the Mean Value Theorem that \begin{align*} e_{n+1} &= e_n + h \frac{\partial f}{\partial y}(x_n,\zeta_n)(Y(x_n) - y_n) + \frac{h^2}{2} Y''(\xi_n) \\[5pt] &= \left[1 + h \frac{\partial f}{\partial y}(x_n,\zeta_n) \right]e_n + \frac{h^2}{2} Y''(\xi_n) \end{align*} for some $\zeta_n$ between $Y(x_n)$ and $y_n$. Now I quote the book (note that the book writes "$y_h(x_n)$" for $y(x_n)$ to emphasize the dependence on $h$):

From the convergence of $y_h(x)$ to $Y(x)$ on $[x_0,b]$, we know that the partial derivatives $\partial f(x_n,\zeta_n)/\partial y$ approach $\partial f(x,Y(x))/\partial y$, and thus they must be bounded in magnitude over $[x_0,b]$. Pick $h_0 > 0$ so that \begin{equation} \hspace{2cm} 1 + h \frac{\partial f(x_n,\zeta_n)}{\partial y} \geq -1, \quad x_0 \leq x_n \leq b \hspace{2cm} (6.2.26) \end{equation}

From [the assumption that $\frac{\partial f}{\partial y} \leq 0$] we know the left side is also bounded above by 1, for all $h$. Apply these results to [the inequality for $e_{n+1}$] to get $$ |e_{n+1}| \leq |e_n| + \frac{h^2}{2} |Y''(\xi_n)|. $$
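Making the "apply these results" step explicit: since $-1 \leq 1 + h\,\frac{\partial f}{\partial y}(x_n,\zeta_n) \leq 1$, the coefficient of $e_n$ has absolute value at most $1$, so the triangle inequality gives

```latex
|e_{n+1}|
  \leq \left|1 + h\frac{\partial f}{\partial y}(x_n,\zeta_n)\right| |e_n|
       + \frac{h^2}{2}\left|Y''(\xi_n)\right|
  \leq |e_n| + \frac{h^2}{2}\left|Y''(\xi_n)\right|.
```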

By induction we can show \begin{align*} |e_n| \leq |e_0| + \frac{h^2}{2} \left[ |Y''(\xi_0)| + \cdots + |Y''(\xi_{n-1})| \right], \end{align*}

which easily leads to [the desired result]. $\qquad \square$
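The step "which easily leads to" can be spelled out: bound each $|Y''(\xi_i)|$ by the maximum over $[x_0,b]$ and use $nh = x_n - x_0$:

```latex
|e_n| \leq |e_0| + \frac{h^2}{2}\, n \max_{x \in [x_0,b]} |Y''(x)|
      = |e_0| + \frac{h}{2}(x_n - x_0) \max_{x \in [x_0,b]} |Y''(x)|.
```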

My questions.

(Q1) When it says "the partial derivatives $\partial f(x_n,\zeta_n)/\partial y$ approach $\partial f(x,Y(x))/\partial y$", does this mean the following?

\begin{align*} \hspace{1.5cm} \max_{0 \leq n \leq N(h)} \left| \frac{\partial f}{\partial y}(x_n, \zeta_n) - \frac{\partial f}{\partial y}(x_n,Y(x_n)) \right| \to 0 \;\text{ as }\; h \to 0. \hspace{1.5cm} (1) \end{align*}

(Q2) When it says that the partial derivatives "must be bounded in magnitude over $[x_0,b]$", does this mean the following?

There exists some number $M > 0$ such that \begin{align*} \hspace{1.5cm} \left|\frac{\partial f}{\partial y}(x_n,\zeta_n) \right| \leq M \quad \textit{for all } h > 0 \textit{ and all } n = 0,1,\ldots,N(h). \hspace{1.5cm} (2) \end{align*}

(Q3) If my interpretations above are correct, how does (2) follow from (1)?

Also, I should mention that I am pretty sure that "the convergence of $y_h(x)$ to $Y(x)$ on $[x_0,b]$" is in the sense of Theorem 6.3, which says that under the assumptions of the corollary (except condition 3, which is not needed), the errors of Euler's method satisfy \begin{align*} \max_{n=0,1,\ldots,N(h)} |e_n| \leq e^{(b - x_0)K} |e_0| + \left[ \frac{e^{(b-x_0)K} - 1 }{K} \right] \frac{h}{2} \|Y''\|_{\infty}. \end{align*}
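For context, it may help to see numerically how much sharper the corollary is than Theorem 6.3. Evaluating both bounds at $x_n = b$ on the same assumed test problem $Y' = -Y$ on $[0,2]$ (so $K = 1$, $e_0 = 0$, $\|Y''\|_\infty = 1$):

```python
import math

# Generic Theorem 6.3 bound vs. the sharper corollary bound, evaluated at
# x_n = b for the assumed test problem Y' = -Y on [0, 2], with K = 1,
# e_0 = 0, and max|Y''| = 1.
x0, b, K, h, max_Y2 = 0.0, 2.0, 1.0, 0.01, 1.0

thm_6_3_bound = (math.exp((b - x0) * K) - 1.0) / K * (h / 2) * max_Y2
corollary_bound = (h / 2) * (b - x0) * max_Y2

# The exponential factor exp((b - x0)K) in Theorem 6.3 is replaced by
# linear growth in b - x0 when df/dy <= 0.
print(f"Theorem 6.3: {thm_6_3_bound:.6f}, corollary: {corollary_bound:.6f}")
```

Both bounds are $O(h)$, but the corollary's constant grows linearly in $b - x_0$ instead of exponentially, which matters on long intervals.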

1 Answer

I'm pretty sure the answers to (Q1) and (Q2) are both yes. However, the author's appeal to "the convergence of $y_h(x)$ to $Y(x)$ on $[x_0,b]$" is unnecessary and confusing. The bound $\left| \frac{\partial f}{\partial y} \right| \leq K$ follows immediately from the Lipschitz condition on $f$, so (2) holds with $M = K$, with no convergence argument needed.
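Spelling out that last claim (the standard argument): dividing the Lipschitz inequality by $|y' - y|$ and letting $y' \to y$ gives

```latex
\left|\frac{\partial f}{\partial y}(x,y)\right|
  = \lim_{y' \to y} \frac{\left|f(x,y') - f(x,y)\right|}{|y' - y|}
  \leq K
  \qquad \text{for every } (x,y) \in [x_0,b] \times \mathbb{R}.
```

In particular $|\partial f(x_n,\zeta_n)/\partial y| \leq K$ for every $h > 0$ and every $n$, which is exactly (2) with $M = K$; and since $\partial f/\partial y \in [-K, 0]$, any $0 < h \leq 2/K$ guarantees (6.2.26).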