I've been told that numerical methods for solving ODEs, such as Euler's method and Runge-Kutta, are all in some way approximations to Picard's iteration, and I'm trying to understand how.
Suppose we have a differential equation on an interval $[x_0,x_L]$:
$$\frac{dy}{dx}=f(x,y)$$ with initial condition $y(x_0)=y_0$
I would like to numerically solve the equation on a set of points $\{x_0<x_1<\dots<x_n\}$, i.e. obtain approximations $y_i$ to the true solution $y(x_i)$ for each $x_i$.
Picard's iteration works as follows:
$$y_{0,0}(x)=y_0$$
$$y_{0,k}(x)=y_0+\int_{x_0}^{x} f(t,y_{0,k-1}(t))\,dt \;\; \mathrm{for} \;\; k \geq 1, \; x\in[x_0,x_1]$$
Suppose we stop at $k=m$; then take $y_1=y_{0,m}(x_1)$.
We then repeat the process for $i \geq 1$.
$$y_{i,0}(x)=y_i$$
$$y_{i,k}(x)=y_i+\int_{x_i}^{x} f(t,y_{i,k-1}(t))\,dt \;\; \mathrm{for} \;\; k \geq 1, \; x\in[x_i,x_{i+1}]$$
$$y_{i+1}=y_{i,m}(x_{i+1})$$
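To make this per-interval iteration concrete, here is a minimal numerical sketch (the name `picard_step` and the choice of a trapezoidal quadrature on a fine grid are mine, purely for illustration): each iterate $y_{i,k}$ is stored by its values on a grid over $[x_i,x_{i+1}]$, and each pass recomputes the cumulative integral from the previous iterate.

```python
import math

def picard_step(f, x_i, y_i, x_next, m, quad_pts=1000):
    """Approximate y(x_next) by m Picard iterations on [x_i, x_next].

    The iterate y_{i,k} is represented by its values on a fine grid,
    and the integral is evaluated with the cumulative trapezoidal rule.
    """
    n = quad_pts
    xs = [x_i + (x_next - x_i) * j / n for j in range(n + 1)]
    ys = [y_i] * (n + 1)              # y_{i,0}(x) = y_i, the constant function
    for _ in range(m):
        fs = [f(x, y) for x, y in zip(xs, ys)]
        new = [y_i]
        acc = 0.0
        for j in range(n):            # cumulative trapezoidal integral
            acc += 0.5 * (fs[j] + fs[j + 1]) * (xs[j + 1] - xs[j])
            new.append(y_i + acc)
        ys = new
    return ys[-1]

# Example: y' = y, y(0) = 1, one big interval [0, 1]; exact answer is e.
for m in (1, 2, 3, 5, 8):
    print(m, picard_step(lambda x, y: y, 0.0, 1.0, 1.0, m))
```

For this example the $m$-th iterate reproduces the partial sum $1+1+\frac{1}{2!}+\dots+\frac{1}{m!}$ of the exponential series (up to quadrature error), so the printed values climb toward $e$ as $m$ grows.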
So is the idea of a numerical method to replace the integral $\int_{x_i}^{x_{i+1}} f(t,y_{i,k-1}(t))\,dt$ with a quadrature approximation, e.g. $(x_{i+1}-x_{i})f(x_i,y_i)$ for Euler's method? What I don't understand is why numerical methods iterate only once for each point $x_i$ (in other words, $m=1$), whereas Picard's iteration suggests you should iterate multiple (potentially many) times for each $x_i$.
Euler's method stops at just one iteration, but Runge-Kutta methods correspond to higher iterates. For example, RK2 (the midpoint method) $$ y_{n+1}=y_n+hf\!\left(x_n+\tfrac{h}{2},\; y_n+\tfrac12 hf(x_n,y_n)\right) $$ is the result of two Picard iterations. First, use the constant function $y^0(x)=y_n$ in $\int_0^{th}f(x_n+\xi,y^{0}(x_n+\xi))\,\mathrm{d}\xi$; this integral is just that of a constant (equivalently, Euler's rule), giving $y^1(x_n+th)=y_n+thf(x_n,y_n)$ for $t\in[0,1]$. Then, in the second iteration, approximate $$ \int_0^{h}f(x_n+\xi,y^{1}(x_n+\xi))\,\mathrm{d}\xi $$ by the midpoint rule.
Similarly, with predictor-corrector methods you can view each corrector pass as carrying out one more Picard iteration, repeated as many times as you want.
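Here is a sketch of that view, using Euler as predictor and the trapezoidal rule as corrector (Heun's method when `sweeps=1`; the function name and the `sweeps` parameter are my own, for illustration). Each extra sweep feeds the latest estimate back into the quadrature, i.e. performs one more Picard pass, and as `sweeps` grows the value converges to the fixed point of the implicit trapezoidal rule, just as Picard iterates converge to the exact local solution:

```python
def heun_pec_step(f, x, y, h, sweeps=1):
    # Predictor: Euler, i.e. the first Picard iterate at x + h.
    y_next = y + h * f(x, y)
    # Corrector: trapezoidal approximation to the Picard integral,
    # re-evaluated with the latest estimate of y(x + h) on each sweep.
    for _ in range(sweeps):
        y_next = y + 0.5 * h * (f(x, y) + f(x + h, y_next))
    return y_next

# y' = y, y(0) = 1, one step h = 0.1: the sweeps converge geometrically
# (contraction factor h/2) to the implicit trapezoidal value 1.05 / 0.95.
for s in (1, 2, 5, 10):
    print(s, heun_pec_step(lambda x, y: y, 0.0, 1.0, 0.1, sweeps=s))
```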