I would like someone to clear up my confusion. Suppose I have a PDE of the form $\dfrac{\partial u\,(x,t)}{\partial t}\,=\,\dfrac{\partial^2 u\,(x,t)}{\partial x^2}+ \dfrac{\partial u\,(x,t)}{\partial x}, \quad \text{where}\quad 0\leq t\leq T\quad \text{and}\quad -\infty< x< \infty,$ which I discretise appropriately using a numerical scheme. Suppose I truncate $x$ to the bounds $x_{\min}$ and $x_{\max}$. Next, the domain $[\,0\,,\,T\,]$ is divided into $N$ intervals and the domain $[x_{\min},x_{\max}]$ into $M$ intervals. I then apply the numerical scheme. I want to test the convergence rate, and here is where my question stems from. (Assume the exact value of $u$ is known.)
Suppose I keep $M$ constant and double $N$ each time and tabulate my results as below:
$N \quad u \quad error \quad rate$
Suppose I get the rate to be around 2 for each $N$. Does this mean my scheme is 2nd order convergent in time?
Now, if I do the same thing for $M$, that is, I keep $N$ constant and double $M$ each time and tabulate my results as below:
$M \quad u \quad error \quad rate$
Suppose I get the rate to be around $4$ for each $M$. Does this mean my scheme is 4th order convergent in space?
So, overall, is my scheme $2$nd order convergent in time and $4$th order convergent in space? Can someone confirm that I understood this properly?
Edit: To calculate the rate, this is how I proceed.
Suppose
$M$=200 error=$e_1$
$M$=400 error=$e_2$
The ratio $\frac{e_1}{e_2} = r_1$ is first calculated, and the rate is obtained as $\frac{\log r_1}{\log 2}$.
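This computation is easy to sketch in code. The error values below are made up purely for illustration (they are not from any actual run); the point is just the two-line rate formula:

```python
import math

# Hypothetical errors from two runs where M was doubled
# (all other parameters held fixed):
e1 = 1.6e-3   # error at M = 200
e2 = 1.0e-4   # error at M = 400

r1 = e1 / e2                        # ratio of successive errors, here 16
rate = math.log(r1) / math.log(2)   # observed order of convergence, here ~4
print(rate)
```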
Wow. Someone has chosen to make this entire process mysterious. Let's first discuss what's going on.
When you double $M$, the spacing between the sample points halves. That is, the grid spacing $\Delta x = (x_{\max} - x_{\min})/M$ decreases by a factor of $2$. If the error is first order in $\Delta x$, then we expect the error to also decrease by a factor of $2$. If it is second order, by a factor of $2^2 = 4$; if third order, by a factor of $2^3 = 8$; and, finally, if fourth order, by a factor of $2^4 = 16$.
According to your prescription for computing $rate$, the error improves by the factor $e_2/e_1$, which, a little oddly, you never compute directly. So, let's try to figure out what your $rate$ is computing and how we can relate it to the very simple idea of order of convergence (which is: halve the spacing, and the error is reduced by what power of $1/2$?).
\begin{align*} rate &= \log_2 r_1 \\ &= \log_2 \frac{e_1}{e_2} \end{align*} so \begin{align*} \frac{e_1}{e_2} &= 2^{rate} \\ \frac{e_2}{e_1} &= \left( \frac{1}{2} \right)^{rate} \end{align*}
Now, $e_2/e_1$ is the factor of improvement of the error and $rate$ is the power of $1/2$ by which we have reduced the error. So $rate$ is the order of convergence.
Note that this very specifically depends on decreasing the spatial spacing by a factor of $2$ (that is, doubling $M$) and decreasing the time step by a factor of $2$ (that is, doubling $N$). If you instead refine by a factor of $3$, use $\log 3$ in the denominator of $rate$, and, generally, if you refine by a factor of $x$, use $\log x$ in the denominator of $rate$.
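The general formula above can be checked with a small sketch. Here the errors are synthetic (manufactured to behave like $C\,h^2$, i.e. a second-order scheme), just to confirm that the formula recovers the right order under a refinement factor other than $2$:

```python
import math

def observed_order(e_coarse, e_fine, refinement_factor):
    """Observed order of convergence when the grid spacing is reduced
    by `refinement_factor` between the coarse and fine runs."""
    return math.log(e_coarse / e_fine) / math.log(refinement_factor)

# Synthetic errors behaving like C * h**2 on spacings h and h/3:
C = 0.7
h = 0.1
e1 = C * h**2          # coarse-grid error
e2 = C * (h / 3)**2    # fine-grid error after refining by a factor of 3

print(observed_order(e1, e2, 3))   # second order recovered, ~2
```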