I am busy with a mesh independence study in computational fluid dynamics, where I am systematically refining my mesh and monitoring a certain parameter of interest with the goal that the value should change less and less as the solution converges.
I have however reached a point where computational resources are hindering me from refining my mesh even though it does not seem like the solution has converged.
Apparently Richardson extrapolation can be used to estimate the converged value of the parameter of interest.
Let's say I have the following variables:
mesh_size = [63984,142014,346431]
parameter = [0.00324461,0.00306612,0.00304778]
Which is plotted as:
[plot of parameter versus mesh_size omitted]
It is quite clear that the curve is leveling off, but it isn't quite flat yet.
How would I implement Richardson extrapolation to estimate the value of parameter at mesh_size = infinity?
I would appreciate some help with how to implement this numerically.
Thank you in advance.
Richardson extrapolation works well if you have a theoretical reason to believe (or know) that the error will go as $h^{n}$ for some known $n$ as $h \rightarrow 0$. Then the extrapolation is merely a matter of curve fitting, and the art is in knowing how to weight the points. On the one hand, the points for smaller $h$ are clearly less susceptible to high-order effects in $h$. On the other hand, the honest uncertainty (including the fact that configurations may have a longer correlation in Monte-Carlo time than one would naively think) is likely to be bigger for the points at smaller $h$, since you can afford more statistics at larger $h$.
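To make the curve-fitting view concrete, here is a minimal Python sketch using the data from the question. It assumes (and this model is an assumption, not something the question establishes) that the parameter behaves like $p(N) \approx p_\infty + C\,N^{-n}$ in the cell count $N$; with exactly three points, the three unknowns $p_\infty$, $C$, $n$ can be solved for directly by reducing to a one-dimensional root-find in $n$:

```python
import numpy as np
from scipy.optimize import brentq

# Data from the question: mesh sizes (cell counts) and the monitored parameter.
N = np.array([63984.0, 142014.0, 346431.0])
p = np.array([0.00324461, 0.00306612, 0.00304778])

# Assumed model: p(N) = p_inf + C * N**(-n), three unknowns, three points.
# Eliminating p_inf and C leaves a single equation for the observed order n.
def order_residual(n):
    w = N ** (-n)
    return (w[0] - w[1]) / (w[1] - w[2]) - (p[0] - p[1]) / (p[1] - p[2])

n = brentq(order_residual, 0.1, 10.0)   # observed convergence order
w = N ** (-n)
C = (p[1] - p[2]) / (w[1] - w[2])
p_inf = p[2] - C * w[2]                 # Richardson estimate at N -> infinity

print(f"observed order n = {n:.2f}, extrapolated parameter = {p_inf:.6e}")
```

Note that an exact three-point solve has no redundancy, so it is fragile against noise in any one point; with more mesh levels you would switch to a least-squares fit of the same model, as discussed below.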
In LQCD (Lattice Quantum Chromodynamics) this sort of problem arises in several ways. In one case, the finite lattice spacing effect is well handled by Richardson-like techniques, because theoretical renormalization group calculations tell you about the small-$h$ behavior. But even in that case, the best results come from "improved actions" which have errors that fall as a higher power of $h$.
In another case, finite volume effects are known to fall off exponentially, with a scale set by the lowest effective mass in the problem. Here RE is less powerful, because you are doing a 2-parameter fit yet you can't vary that mass independently (the raw masses you inject in the Lagrangian get renormalized). But there is not much of a problem, because you can see the leveling prominently.
But in LQCD, the most difficult extrapolation is to light enough quark masses. And there, theory provides only broad guidance. The technique used is to push to one finer grid spacing after you think you know the result to some accuracy, to verify that the error behavior you have surmised is correct. And that one last point generally takes as much time as all the rest of the computation combined.
So my recommended mantra is:
See if there is any theoretical reason to expect your error to shrink as $h^{n}$ for some specific $n$. If so, do some careful uncertainty analysis of the individual points (perhaps by breaking the Monte-Carlo time into several parts and evaluating the quantity of interest for each part separately), and do a least-squares fit to get the asymptote.
If there is no theoretical reason to expect any specific $h^{n}$ or other simple behavior, such as an exponential approach to the limit, then after computing a few points, try to guess the form of the approach.
If you can make a decent guess as to the nature of the approach to the limit, proceed as in the first case. Be careful to honestly evaluate your uncertainty, because the result may be quite sensitive to the value of an earlier point. Probably the easiest way to do an honest job of that is to assess the individual point uncertainties, and then do a quick Monte Carlo, letting each of the points vary as Gaussians about their actual values, to see how your answer would vary.
If you can't make a confident guess about the nature of the approach to the limit, probably the best result to quote is the result of the final calculation, with an uncertainty of the difference between the last two points, added in quadrature with your estimated uncertainty in the last point.
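Here is a Python sketch of the weighted least-squares fit and the Gaussian Monte-Carlo uncertainty check described above, applied to the question's data. The convergence order and the per-point uncertainties are made-up placeholders for illustration; with real data you would substitute your own error estimates (e.g. from the block-averaging suggested above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data from the question; the 1-sigma errors below are hypothetical placeholders.
N     = np.array([63984.0, 142014.0, 346431.0])
p     = np.array([0.00324461, 0.00306612, 0.00304778])
sigma = np.array([2e-5, 1e-5, 5e-6])

n_order = 2.0                # assumed (not derived) convergence order in N
x = N ** (-n_order)

def fit_asymptote(values):
    """Weighted least squares for values ~ p_inf + C * N**(-n_order)."""
    A = np.column_stack([np.ones_like(x), x]) / sigma[:, None]
    coeffs, *_ = np.linalg.lstsq(A, values / sigma, rcond=None)
    return coeffs[0]          # p_inf, the fitted asymptote

p_inf = fit_asymptote(p)

# Monte Carlo: jitter each point by its assumed uncertainty and refit,
# then quote the spread of the fitted asymptotes as its uncertainty.
samples = [fit_asymptote(p + rng.normal(0.0, sigma)) for _ in range(2000)]
print(f"p_inf = {p_inf:.6e} +/- {np.std(samples):.1e}")
```

The spread of `samples` tells you directly how sensitive the extrapolated value is to noise in the individual points, which is exactly the honesty check recommended above.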