So I was implementing a simple finite difference solver for second order ODEs and decided to evaluate the convergence properties. The first problem is
$ \frac{d^{2}y}{dx^{2}} = k, \quad y(x=0) = y(x=L) = 0$.
With a central difference approximation, I get
$ \frac{y^{j+1} - 2y^{j} + y^{j-1}}{(\Delta x)^2} = k$. Nothing fancy here. So after coding it up and solving, we get the following
So two things jump out at you. The first is that a simple finite difference scheme achieves near machine-precision accuracy with a small number of cells. The second is that the RMSE increases with mesh resolution (i.e. number of nodes), and the condition number increases as well. This part makes some sense, because we lose quite a few decimal digits of precision as we refine, and we are already quite accurate from the start.
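For reference, here is a minimal NumPy sketch of the setup for the first problem (the function name and defaults are my own, not from the original code); the RMSE is measured against the exact solution $y = \frac{k}{2}x(x - L)$:

```python
import numpy as np

# Sketch of the central-difference solve for y'' = k, y(0) = y(L) = 0.
# n interior nodes; dx = L / (n + 1). Names here are illustrative.
def solve_poisson_1d(k=1.0, L=1.0, n=9):
    dx = L / (n + 1)
    x = np.linspace(dx, L - dx, n)           # interior nodes only
    A = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    b = np.full(n, k)                        # constant right-hand side
    y = np.linalg.solve(A, b)
    exact = 0.5 * k * x * (x - L)            # exact solution is quadratic
    rmse = np.sqrt(np.mean((y - exact)**2))
    return rmse, np.linalg.cond(A)
```

Since the exact solution is a quadratic, the central difference stencil reproduces it without truncation error, so the RMSE sits near machine precision at any resolution.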
If we were to implement a slightly more interesting problem
$\frac{d^{2}y}{dx^{2}} - kx = 0, \quad y(x=0) = y_{0}, \frac{dy}{dx}(x=L) = 0$ and similarly construct the finite difference scheme, we get the following
We can similarly see the condition number worsening, which is somewhat expected. But we can also see the second-order convergence of the central difference scheme quite nicely here.
So my question(s) are as follows:
Is the high accuracy for the first problem expected? That kind of surprised me.
Is the worsening RMSE in the first case due to the fact that we solve it very accurately from the get-go, while in the second case increasing the mesh resolution improves accuracy because we aren't yet near the level of precision where the growing condition number impacts accuracy?
Thanks for the help!




Regarding the second point: As you observe, the RMSE is still bounded by the accuracy of the finite difference approximation and not by your linear solver.
Why is this? Let's take a look at the system you want to solve. Multiplying the discretized ODE by $-\Delta x^2$ (did you try this, to maybe obtain a slightly better conditioned matrix?), you obtain $$ -y_{i-1} + 2y_i - y_{i+1} = -k x_i \Delta x^2 \quad \forall\, i. $$ The arising matrix is tridiagonal and in particular symmetric positive definite. This gives you access to very fast and (relatively) robust solvers. I do not know how you solve the arising linear system, but if you use e.g. Matlab's backslash operator
\, you effectively get such stable algorithms for free: for a sparse SPD matrix it dispatches to a direct method such as a banded Cholesky factorization. For larger systems, an iterative solver such as conjugate gradients is an alternative; there, a preconditioner might be applied that, as the name suggests, aims to improve the condition number of the matrix before handing it over to the iterative solver.
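To illustrate (in Python rather than Matlab; the size and right-hand side are made up for the example), here is the tridiagonal SPD system $-y_{i-1} + 2y_i - y_{i+1} = f_i$ solved both with a banded direct solver and with conjugate gradients:

```python
import numpy as np
from scipy.linalg import solveh_banded
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Illustrative tridiagonal SPD system -y_{i-1} + 2 y_i - y_{i+1} = f_i
# with homogeneous Dirichlet boundaries implied; f is a dummy RHS.
n = 200
f = np.ones(n)

# Direct solve exploiting the banded SPD structure (upper-band storage).
ab = np.vstack([np.r_[0.0, -np.ones(n - 1)],  # superdiagonal
                2.0 * np.ones(n)])            # main diagonal
y_direct = solveh_banded(ab, f)

# Iterative solve with conjugate gradients on the sparse matrix.
A = diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
          offsets=[-1, 0, 1], format="csr")
y_cg, info = cg(A, f)                         # info == 0 on convergence
```

Both routes give the same solution; the direct banded solve is typically the right choice at these sizes, while (preconditioned) CG pays off for much larger SPD systems.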