Regarding Picard's iteration and its relation to numerical methods, such as Euler's method.


I've been told that numerical methods for solving ODEs, such as Euler's method and Runge-Kutta, are all in some way approximations to Picard's iteration, and I'm trying to understand how.

Suppose we have a differential equation on an interval $[x_0,x_L]$:

$$\frac{dy}{dx}=f(x,y)$$ with initial condition $y(x_0)=y_0$

I would like to numerically solve the equation on a set of points $\{x_0<x_1<\dots<x_n\}$, i.e. obtain approximations $y_i$ to the true solution $y(x_i)$ for each $x_i$.

Picard's iteration works as follows:

$$y_{0,0}(x)=y_0$$

$$y_{0,k}(x)=y_0+\int_{x_0}^{x} f(t,y_{0,k-1}(t))\,dt \;\; \text{for} \;\; k \geq 1$$

Suppose we stop at $k=m$; we then take $y_1=y_{0,m}(x_1)$.

We then repeat the process for each $i \geq 1$:

$$y_{i,0}(x)=y_i$$

$$y_{i,k}(x)=y_i+\int_{x_i}^{x} f(t,y_{i,k-1}(t))\,dt \;\; \text{for} \;\; k \geq 1$$

$$y_{i+1}=y_{i,m}(x_{i+1})$$

So is the idea of a numerical method (e.g. Euler's method) to replace the integral $\int_{x_i}^{x_{i+1}} f(t,y_{i,k-1}(t))\,dt$ with an approximation such as $(x_{i+1}-x_{i})f(x_i,y_i)$ (Euler's method)? What I don't understand is why numerical methods iterate only once for each point $x_i$ (in other words, $m=1$), whereas Picard's iteration suggests you should iterate multiple (potentially many) times for each $x_i$.
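To make the question concrete, here is a minimal Python sketch of the scheme described above (the names `picard_step` and `euler_step` are my own, and the trapezoidal quadrature on a fine subgrid is an assumed implementation choice): $m$ Picard sweeps per subinterval, with Euler's method recovered as the $m=1$ case where the integral is approximated by the left-endpoint rule.

```python
def f(x, y):
    # test ODE y' = y, y(0) = 1, exact solution y = exp(x)
    return y

def picard_step(f, x0, x1, y0, m, quad_pts=64):
    """Approximate y(x1) from y(x0) = y0 by m Picard iterations,
    evaluating each integral with the composite trapezoidal rule
    on quad_pts subintervals (an assumed quadrature choice)."""
    h = (x1 - x0) / quad_pts
    xs = [x0 + j * h for j in range(quad_pts + 1)]
    ys = [y0] * (quad_pts + 1)        # iterate 0: the constant function y0
    for _ in range(m):
        # next iterate: y(x) = y0 + int_{x0}^{x} f(t, y_prev(t)) dt
        new, acc = [y0], 0.0
        for j in range(quad_pts):
            acc += 0.5 * h * (f(xs[j], ys[j]) + f(xs[j + 1], ys[j + 1]))
            new.append(y0 + acc)
        ys = new
    return ys[-1]

def euler_step(f, x0, x1, y0):
    # one Picard iteration with the left-endpoint rule for the integral
    return y0 + (x1 - x0) * f(x0, y0)
```

With more sweeps $m$ per subinterval the result converges to the solution of the discretized integral equation, not just to a first-order update.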


There are 4 best solutions below


Euler's method stops after a single Picard iteration, but Runge-Kutta methods correspond to higher iterates. For example, RK2 (the midpoint method): $$ y_{n+1}=y_n+hf(x_{n+\frac12}, y_n+\tfrac12 hf(x_n,y_n)) $$ is the result of the second Picard iteration: first use the constant function $y^0(x)=y_n$ in $\int_0^{th}f(x_n+\xi,y^{0}(x_n+\xi))\,\mathrm{d}\xi$, which is exactly Euler's method and gives $y^1(x_n+th)=y_n+thf(x_n,y_n)$ for $t\in[0,1]$; then evaluate the integral of the second iteration, $$ \int_0^{h}f(x_n+\xi,y^{1}(x_n+\xi))\,\mathrm{d}\xi, $$ with the midpoint rule.
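This identity is easy to check numerically; the following sketch (hypothetical names, arbitrary test function `f`) builds the second Picard iterate exactly as described and compares it with a direct RK2 step.

```python
def f(x, y):
    # arbitrary test right-hand side
    return x * y + 1.0

def rk2_step(f, x, y, h):
    # explicit midpoint method (RK2)
    return y + h * f(x + 0.5 * h, y + 0.5 * h * f(x, y))

def two_picard_steps(f, x, y, h):
    """Second Picard iterate: the first iteration uses the constant
    function (Euler), the second integral is done by the midpoint rule."""
    # iterate 1: y1(x + t*h) = y + t*h*f(x, y), evaluated at the midpoint
    y1_mid = y + 0.5 * h * f(x, y)
    # iterate 2: y + int_0^h f(x+xi, y1(x+xi)) dxi ~ y + h*f(x+h/2, y1_mid)
    return y + h * f(x + 0.5 * h, y1_mid)
```

The two functions compute the identical expression $y_n + hf(x_n+\tfrac h2,\, y_n+\tfrac h2 f(x_n,y_n))$, so they agree to machine precision.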

Similarly, there are predictor-corrector methods, in which the corrector step can be viewed as approximating the repeated Picard iterations, applied as many times as you want.


The question is somewhat like asking whether, to compute $1+1$, one necessarily has to apply residue calculus. That said, it is more the fundamental theorem of calculus than the Picard iteration that guides the construction of some basic numerical methods: $$ y_{n+1}-y_n=\int_{x_n}^{x_{n+1}}f(x,y(x))\,dx. $$ What one can reasonably motivate with this formula and related iteration schemes are one-step methods up to order 2 and the Adams-Bashforth-Moulton multistep formulas.
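For instance, the 2-step Adams-Bashforth formula follows from this identity by replacing $f(x,y(x))$ on $[x_n,x_{n+1}]$ with the linear extrapolant through the last two computed values and integrating exactly. A sketch (hypothetical names; the RK2 starting step is an assumed choice):

```python
def f(x, y):
    # test ODE y' = y, y(0) = 1, exact solution exp(x)
    return y

def ab2(f, x0, y0, h, n):
    """Adams-Bashforth 2-step: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1}),
    obtained by integrating the linear extrapolant of f through the last
    two grid points. One RK2 step supplies the second starting value."""
    xs = [x0, x0 + h]
    ys = [y0, y0 + h * f(x0 + 0.5 * h, y0 + 0.5 * h * f(x0, y0))]
    for i in range(1, n):
        y_next = ys[i] + h * (1.5 * f(xs[i], ys[i]) - 0.5 * f(xs[i - 1], ys[i - 1]))
        xs.append(xs[i] + h)
        ys.append(y_next)
    return xs, ys
```

Halving the step size should reduce the error at a fixed endpoint by roughly a factor of 4, confirming order 2.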

The origin of the Runge-Kutta methods does indeed lie in attempts to elevate known quadrature methods to ODE integration methods of correspondingly higher order. Runge (1895) started from Simpson's rule to get a 4-stage 3rd-order method. Heun (1900), while computing his methods by matching coefficients in Taylor expansions, still extensively discussed methods based on the 4th-order Gauß quadrature.

However, Kutta (1901), in introducing what are now known as general explicit Runge-Kutta methods, computed the order conditions directly from the Taylor expansions and fully solved the $s$-stage, order-$s$ methods for $s=2,3,4$. Only in examples did he then take known quadrature methods as design decisions to reduce the number of free variables. This approach (find the order conditions, establish some simplifying design decisions, solve the resulting algebraic system) has remained valid for the construction of all modern higher-order methods.


All these methods do approximate the solution of the ODE. Picard's method is more of a theoretical tool. It can be used to obtain approximate analytical solutions, or to prove theorems. As far as I know, it is never combined with a numerical integration method, as that would be quite inefficient.
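As an example of Picard's iteration as an analytical tool: for $y'=y$, $y(0)=1$, the iterates can be computed exactly, and they turn out to be the Taylor partial sums of $e^x$. A small sketch with exact rational coefficients (my own naming):

```python
from fractions import Fraction

def picard_iterates(m):
    """Exact m-th Picard iterate for y' = y, y(0) = 1, returned as a
    coefficient list [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...
    The integration is done symbolically on the coefficients."""
    coeffs = [Fraction(1)]                    # iterate 0: the constant 1
    for _ in range(m):
        # y_new(x) = 1 + int_0^x y_prev(t) dt; integrating term by term
        # maps c_k x^k to c_k x^{k+1} / (k+1)
        coeffs = [Fraction(1)] + [c / (k + 1) for k, c in enumerate(coeffs)]
    return coeffs
```

After $m$ iterations the coefficients are exactly $1/k!$ for $k=0,\dots,m$, i.e. the degree-$m$ Taylor polynomial of $e^x$.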

The basic numerical method is indeed Euler's, and the Runge-Kutta methods are improvements that give a better convergence order for smooth functions. (In the same way, the rectangle rule is the basic numerical integration method for explicit functions, while the Newton-Cotes formulas are improvements.)
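A quick experiment illustrates the difference in convergence order (a sketch with hypothetical names; test problem $y'=y$ on $[0,1]$): halving the step size roughly halves Euler's error but divides the classical RK4 error by about 16.

```python
def f(x, y):
    # y' = y, y(0) = 1, exact y(1) = e
    return y

def euler(f, y0, h, n):
    # forward Euler with n steps of size h, starting at x = 0
    x, y = 0.0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def rk4(f, y0, h, n):
    # classical 4th-order Runge-Kutta with n steps of size h
    x, y = 0.0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(x + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y
```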


Well, the error of Euler's method decays like $1/n$ in the number of steps $n$, while the error of Picard's iteration decays like $a^k$ (for some $a<1$) in the number of iterations $k$. So Picard gives a very good solution once $k$ is large enough; in fact, only about $\log n$ iterations are needed to match the accuracy of $n$ Euler steps. Notice, however, that each Picard iteration is quite expensive, which is why it is rarely used in numerical computation but rather in theory, where you just do the integration as with a magic wand.
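For the model problem $y'=y$, $y(0)=1$, this can be made concrete: the $k$-th Picard iterate is the $k$-th Taylor partial sum of $e^x$, so its error at $x=1$ decays like $1/(k+1)!$ (even faster than the geometric bound $a^k$), while Euler's error with $n$ steps decays only like $1/n$. A sketch (my own naming):

```python
import math

def picard_errors(kmax):
    """Error at x = 1 of the k-th Picard iterate for y' = y, y(0) = 1,
    for k = 1..kmax. For this ODE the iterates are exactly the Taylor
    partial sums of exp, so no quadrature is needed."""
    s, fact, errs = 1.0, 1.0, []
    for k in range(1, kmax + 1):
        fact *= k
        s += 1.0 / fact
        errs.append(math.e - s)
    return errs

def euler_errors(ns):
    """Error at x = 1 of Euler's method with n steps, for y' = y:
    the numerical value is (1 + 1/n)^n."""
    return [abs(math.e - (1.0 + 1.0 / n) ** n) for n in ns]
```

Ten Picard iterations already give about eight correct digits, while doubling the number of Euler steps from 10 to 20 only halves the error.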