In RK methods you are given a (small) time increment $h>0$, and you compute an approximate solution $(y_n)_n$ of an (autonomous, say) ODE $y'=X(y)$ by forming a well-chosen linear combination of stage values $X(y_n+h k_j)$ to obtain $y_{n+1}$. The error is $O(h^p)$, where $p$ is the order of the method, and effective bounds can be obtained when enough regularity is imposed on $X$.
I'm wondering what happens when you don't fix $h$ but instead take it at each step $n$ to be of the form $$h:=\frac{h_0}{||X(y_n)||}$$ or $h:=\frac{h_0}{\max(||X(y_n)||,1)}$, for a given $h_0>0$. This is a kind of "adaptive" method (it adapts the time increment to the size of $X$), and it seems to give a pretty good (accuracy)/(computation time) ratio, at least visually.
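For concreteness, here is a minimal Python sketch of what I mean, using the second (capped) variant of the step rule on top of classical RK4. The function name and the guard against overshooting the final time are my own additions, not part of any standard scheme:

```python
import numpy as np

def adaptive_rk4(X, y0, t_end, h0):
    """Classical RK4 where the step is recomputed at each iteration
    as h = h0 / max(||X(y)||, 1)."""
    t, y = 0.0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = h0 / max(np.linalg.norm(X(y)), 1.0)
        h = min(h, t_end - t)  # don't step past the final time
        k1 = X(y)
        k2 = X(y + 0.5 * h * k1)
        k3 = X(y + 0.5 * h * k2)
        k4 = X(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)
```

So e.g. on $y'=-y$ with $||X(y)||\le 1$ the cap makes the step constant equal to $h_0$, while on a vector field that blows up the step shrinks like $h_0/||X||$.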
Is it known/named/studied? How would I go about finding explicit error estimates?
If that helps (though I doubt it), I'm mainly interested in the case where $y\in\mathbb C$ and $X$ is holomorphic/polynomial.