I have a question about the Convergence Definition for Finite Difference Scheme. The definition is given by
Convergence: for a one-step scheme approximating an IBVP to be convergent, we compare $U(x,t)$ (true solution) and $U^n_m$ (numerical solution): if $U^0_m$ converges to $U_0(x)$ as $mh\rightarrow x$, then $U^n_m$ converges to $U(x,t)$ as $(mh, nk)\rightarrow (x,t)$ with $h,k\rightarrow 0$. That is, as $h,k\rightarrow 0$ the approximation gets uniformly closer to the exact solution on the lattice.
What is the meaning of "if $U^0_m$ converges to $U_0(x)$ as $mh\rightarrow x$"? I am thinking that the initial data $U^0_m$ are always given by the initial condition, which is naturally $U_0(x)$ evaluated at the grid points. Why does the definition talk about 'convergence' of $U^0_m$ here?
Thank you very much.
While $U^0_m = U_0(mh)$ is a natural choice for the discrete initial condition, it isn't the only one. There are situations in which other discrete initial data are better, and the definition you quoted takes that into account.
Consider e.g. the case that $U_0$ is highly oscillatory compared to your grid size: $U^0_m = U_0(mh)$ would reflect the actual behavior of your initial condition quite poorly, and your discrete solutions would depend strongly on your grid. To prevent this you could instead define $U^0_m$ as the average of $U_0$ around $x = mh$ in some sense.
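To make the aliasing problem concrete, here is a minimal NumPy sketch (the function names, the test initial condition $U_0(x) = \cos(20\pi x)$, and the midpoint-rule quadrature are my own illustrative choices, not part of the answer above). With $h = 0.1$, pointwise sampling sees $\cos(2\pi m) = 1$ at every grid point, so the discrete data look like the constant $1$, while cell averages correctly recover that $U_0$ averages to $0$ over each cell:

```python
import numpy as np

def pointwise_ic(u0, h, M):
    """Natural choice: sample U_0 at the grid points x = m*h."""
    return u0(np.arange(M) * h)

def cell_average_ic(u0, h, M, quad_pts=50):
    """Alternative: approximate the average of U_0 over the cell
    [m*h - h/2, m*h + h/2] with a midpoint-rule quadrature."""
    x = np.arange(M) * h
    # equispaced quadrature offsets in (-1/2, 1/2), scaled by h
    s = (np.arange(quad_pts) + 0.5) / quad_pts - 0.5
    return u0(x[:, None] + s[None, :] * h).mean(axis=1)

# An initial condition oscillating on the scale of the grid:
# period 0.1 equals the grid spacing h, so samples alias badly.
u0 = lambda x: np.cos(20 * np.pi * x)
h, M = 0.1, 11

pw = pointwise_ic(u0, h, M)     # all entries ~1: aliased to a constant
avg = cell_average_ic(u0, h, M)  # all entries ~0: true cell averages
```

Here the pointwise data completely miss the oscillation, whereas the averaged data at least capture its mean behavior; which discretization is "better" depends on the scheme, which is exactly why the definition only requires $U^0_m \to U_0(x)$ rather than fixing one choice.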