The Markov constant of a real number $r$ is
$$ \limsup_{d \to \infty} \frac{1}{|r-n/d|d^2} $$
where, for each $d$, we choose the best possible numerator, i.e. $n = \text{round}(r\cdot d)$.
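As a numerical illustration (not part of the definition), the quantity inside the limsup can simply be tabulated. The helper below, with names of my own choosing, does so for $r=\sqrt 2$, whose Markov constant is known to be $2\sqrt 2 \approx 2.828$; the denominators at which the quality comes close to that value are exactly the convergent denominators of $\sqrt 2$.

```python
from math import sqrt

def approx_quality(r, d):
    """The quantity inside the limsup: 1 / (|r - n/d| d^2), with n = round(r*d)."""
    n = round(r * d)
    err = abs(r - n / d)
    return float('inf') if err == 0 else 1.0 / (err * d * d)

# For r = sqrt(2) the Markov constant is 2*sqrt(2) ~ 2.828, and the only
# denominators whose quality exceeds 2.5 are the convergent denominators.
r = sqrt(2)
peaks = [d for d in range(1, 1000) if approx_quality(r, d) > 2.5]
print(peaks[:6])  # [2, 5, 12, 29, 70, 169]
```

By Legendre's theorem, any $n/d$ in lowest terms with quality above $2$ must be a convergent, which is why only convergent denominators appear in the list.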
This number measures how good the best rational approximations of $r$ become asymptotically as $d \to \infty$. That is not quite the same as asking how good the best possible rational approximation actually is: often some rational with small denominator performs better than anything in the infinite "tail." The question of what this best possible value actually is, for a given real number, seems rather nontrivial and interesting.
To evaluate the quality of the best rational approximation over *all* rationals, we can instead look at
$$ \sup_{d \ge 1} \frac{1}{|r-n/d|d^2} $$
where again $n = \text{round}(r\cdot d)$.
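A brute-force sketch of this quantity, truncating the supremum at a finite bound (which is only a heuristic, not a proof that the bound suffices): for the golden ratio $\varphi$, the maximum over $d \le 10^4$ is attained already at $d = 1$, $n = 2$, with value $\varphi^2 \approx 2.618$, strictly above the asymptotic value $\sqrt 5 \approx 2.236$. This is exactly the "early rational beats the tail" phenomenon described above. The function names here are my own.

```python
from math import sqrt

def approx_quality(r, d):
    """1 / (|r - n/d| d^2) with n the nearest integer to r*d."""
    n = round(r * d)
    err = abs(r - n / d)
    return float('inf') if err == 0 else 1.0 / (err * d * d)

def best_quality_denominator(r, d_max):
    """Arg-max of the quality over 1 <= d <= d_max: a finite stand-in for the sup."""
    return max(range(1, d_max + 1), key=lambda d: approx_quality(r, d))

phi = (1 + sqrt(5)) / 2
d_star = best_quality_denominator(phi, 10_000)
print(d_star, approx_quality(phi, d_star))  # 1  2.618... (= phi^2, above sqrt(5))
```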
Question: is there a general theory of what this value is, what it's called, and how to compute it? It is clearly related in some sense to the coefficients of the continued fraction. What reals are least approximable using this metric?
It seems clear enough that the answer will have something to do with how large the coefficients of the continued fraction get, and how early those large values occur. For instance, if $N$ is some large natural number, then we may expect $[1;1,1,N,1,1,1,\ldots]$ to score very well, because it has a very good approximation at the simple rational $[1;1,1] = 3/2$. We should also expect it to score very slightly better than $[1;1,1,N-1,1,1,1,\ldots]$. And one might expect both of these to score better than $[1;1,1,1,N,1,1,\ldots]$, which instead has a great approximation at the slightly less simple $[1;1,1,1] = 5/3$.
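The expectation about $N$ versus $N-1$ can at least be checked numerically. In the sketch below, finite truncations stand in for the infinite tails (the coefficients past $N$ barely affect the quality at $3/2$), exact rational arithmetic avoids floating-point noise, and `cf_value`/`quality` are helper names of my own:

```python
from fractions import Fraction

def cf_value(coeffs):
    """Evaluate a finite continued fraction [a0; a1, a2, ...] exactly."""
    x = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        x = a + 1 / x
    return x

def quality(r, frac):
    """1 / (|r - n/d| d^2) for the specific rational frac = n/d (lowest terms)."""
    d = frac.denominator
    return 1 / (abs(r - frac) * d * d)

N = 10**6
r_big   = cf_value([1, 1, 1, N,     1, 1, 1, 1, 1, 1])
r_small = cf_value([1, 1, 1, N - 1, 1, 1, 1, 1, 1, 1])

q_big = quality(r_big, Fraction(3, 2))
q_small = quality(r_small, Fraction(3, 2))
print(float(q_big), float(q_small))  # both roughly N, with q_big > q_small
```

The standard estimate behind this is $|r - p_k/q_k| = 1/\bigl(q_k^2(\alpha_{k+1} + q_{k-1}/q_k)\bigr)$, so the quality at the convergent just before a coefficient $a_{k+1} = N$ is about $N$ plus bounded corrections.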