Consider the ODE $y'=f(x,y)$ with $x_0 = 0$, $y_0 = y(x_0) = 0$. We wish to approximate $y$ on the interval $[0,1]$. Let $h_1$ be some reasonable step size and $h_2 = \frac{1}{2} h_1$. These two choices give rise to two partitions $P_i = \{ 0, h_i, 2h_i, \dotsc, 1 \}$ of the interval $[0,1]$ ($i=1,2$). Denote by $\tilde{y}_j(h_i)$ the $j$th RK4 approximation of $y(jh_i)$ ($j=0,1,\dotsc$).
I have an assignment to verify that RK4 has a global truncation error of $O(h^4)$ (for a specific ODE), but I'm not sure how to do that. My best guess would be to compute $$\frac{y(jh_2)-\tilde{y}_j(h_2)}{y(jh_1)-\tilde{y}_j(h_1)}$$ and argue that this is close to $1/16 = 2^{-4}$ (halving the step size should reduce an $O(h^4)$ error by a factor of $2^4$), whence the global truncation error is $O(h^4)$, but I'm not sure at all, and the approximations I have don't seem to support that assertion anyway.
Let $\Sigma_h$ denote the grid \begin{equation} \Sigma_h = \{ jh \: : \: j \in \{0,1,2,\dotsc\} \} \end{equation} and let $v_h : \Sigma_h \rightarrow \mathbb{R}$ denote the grid function obtained by applying RK4 with stepsize $h$ to your initial value problem. Let $u : I \rightarrow \mathbb{R}$ denote the exact solution of your initial value problem. Your intuition is to compute the fractions \begin{equation} G_h(t) = \frac{u(t) - v_{2h}(t)}{u(t) - v_h(t)}, \quad t \in \Sigma_h \cap \Sigma_{2h} = \Sigma_{2h}, \end{equation} and one would hope to find that $G_h(t) \approx 16 = 2^4$. Unfortunately, this is not the case and the numbers are in fact quite erratic. Instead, one should consider Richardson's fractions $F_h$ given by \begin{equation} F_h(t) = \frac{v_{2h}(t) - v_{4h}(t)}{v_h(t) - v_{2h}(t)}, \quad t \in \Sigma_{4h}. \end{equation} Provided that there is sufficient differentiability available, you will find that these fractions tend (monotonically) to $2^p$, where $p$ is the order of the method. This nice behavior relies on the existence of an asymptotic error expansion of the form \begin{equation} u(t) - v_h(t) = \alpha(t) h^p + \beta(t) h^{q} + O(h^r), \quad p < q < r. \end{equation} Deriving the expansion is typically a bit complicated, but there is nothing stopping you from verifying numerically that Richardson's fractions behave as described.
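In case it helps, here is a minimal sketch of this experiment in Python. It assumes the classical RK4 method and the arbitrarily chosen test problem $u'(t) = -u(t)$, $u(0) = 1$ on $[0,1]$ (picked so that the differences $v_h(1) - v_{2h}(1)$ stay well above rounding level); any sufficiently smooth problem works the same way.

```python
def rk4_step(f, t, u, h):
    """One step of the classical fourth order Runge-Kutta method."""
    k1 = f(t, u)
    k2 = f(t + h / 2, u + h / 2 * k1)
    k3 = f(t + h / 2, u + h / 2 * k2)
    k4 = f(t + h, u + h * k3)
    return u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, u0, t_end, n):
    """Approximate u(t_end) using n RK4 steps of size h = t_end / n."""
    t, u, h = 0.0, u0, t_end / n
    for _ in range(n):
        u = rk4_step(f, t, u, h)
        t += h
    return u

f = lambda t, u: -u                       # test problem u' = -u, u(0) = 1

# v_h(1) for h = 1/n; halving h each time doubles the step count n.
v = {n: integrate(f, 1.0, 1.0, n) for n in (4, 8, 16, 32, 64)}

# Richardson's fractions at t = 1: F_h = (v_{2h} - v_{4h}) / (v_h - v_{2h}).
for n in (16, 32, 64):
    F = (v[n // 2] - v[n // 4]) / (v[n] - v[n // 2])
    print(f"h = 1/{n:3d}:  F_h = {F:.3f}")
```

The printed fractions settle near $16 = 2^4$, as expected for a fourth order method; note that the exact solution never enters the computation.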
As an illustration of this technique consider the ODE \begin{equation} u'(t) = - \frac{1}{10} u(t), \quad t \in [0,1] \end{equation} together with the initial condition $u(0) = 2$. The exact solution is of course $u(t) = 2 \exp(-t/10)$. This equation was integrated using a second order explicit Runge-Kutta method and several different stepsizes $h_j = 2^{j} h_0$. The results are given here:
The last column contains the actual error; the second to last column contains Richardson's error estimates \begin{equation} E_h(t) = \frac{v_h(t) - v_{2h}(t)}{2^p-1}, \end{equation} where $p=2$ is the order. You will notice that there is exceedingly good agreement between the actual error and the error estimate. However, here you should primarily be interested in Richardson's fractions. For each row, i.e. for each fixed value of $t$, you will observe that the fraction tends monotonically to $4 = 2^2$ as you move to the right (decreasing the value of the stepsize). This is experimental verification of the fact that there exists an asymptotic error expansion of the given form and that the dominant error term, i.e. $\alpha(t) h^p$, has $p = 2$.
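A short sketch of how two of the table's columns can be reproduced, assuming the explicit midpoint rule as the second order method (which particular RK2 method was used is an assumption) applied to $u' = -u/10$, $u(0) = 2$, comparing $E_h$ against the actual error at $t = 1$:

```python
from math import exp

def midpoint(f, u0, t_end, n):
    """Approximate u(t_end) with n steps of the explicit midpoint rule."""
    t, u, h = 0.0, u0, t_end / n
    for _ in range(n):
        u = u + h * f(t + h / 2, u + h / 2 * f(t, u))
        t += h
    return u

f = lambda t, u: -u / 10                  # the example ODE u' = -u/10
exact = 2 * exp(-0.1)                     # u(1) for u(0) = 2
p = 2                                     # order of the method

v = {n: midpoint(f, 2.0, 1.0, n) for n in (8, 16, 32, 64)}

for n in (16, 32, 64):
    E = (v[n] - v[n // 2]) / (2**p - 1)   # Richardson's estimate E_h(1)
    err = exact - v[n]                    # actual error u(1) - v_h(1)
    print(f"h = 1/{n:2d}:  E_h = {E:+.3e}   u - v_h = {err:+.3e}")
```

The two printed columns agree to within a few percent, and the agreement improves as $h$ decreases, mirroring the behavior described above.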
Richardson's full name is Lewis Fry Richardson. Richardson extrapolation is discussed in almost any introduction to numerical analysis, but it is primarily used to compute error estimates rather than to probe a "new" method to experimentally determine its order, as I have done here. It is possible to say substantially more about this. In particular, one needs to be careful when using high order methods and small stepsizes, as there are numerical issues related to the limitations of floating point arithmetic (catastrophic cancellation) which will affect the behavior of the fractions.