The field $\mathbb Q$ is dense in $\mathbb R$, so we can approximate a real number $\alpha$ by an arbitrarily close rational number $r=\frac{a}{b}$.
The purpose of Diophantine approximation is to find fractions $\frac{a}{b}$ approximating $\alpha$ so that $|\alpha-\frac{a}{b}|<f(b)$ for some function $f$. Roughly speaking, we want to compare the "exactness" of the approximation with the denominator of the approximating number. In other words, we would like $\frac{a}{b}$ to be close to $\alpha$ while the denominator $b$ stays small.
My question is: why are we so interested in the size of "$b$"? Why such emphasis on denominators? To me it seems a naive starting point for a whole theory. Evidently there is something I don't understand.
Suppose we want to approximate a number $\alpha$ by a rational $r=\frac{a}{b}$. We obviously care about how close $\alpha$ and $r$ can get – that's the whole point of approximations. We also care about how complex our fraction $r$ is: of course we could approximate $\pi$ by $\frac{3141593}{1000000}$, but since $\frac{355}{113}$ has about the same precision and is much "simpler", we could argue that it is a much better approximation. It just so happens that the size of $b$ is quite a good indicator of both of these intuitive notions, of "closeness" and "complexity". Let me elaborate.
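To make this concrete, here is a small Python sketch comparing the two approximations of $\pi$ mentioned above (the variable names are mine):

```python
import math
from fractions import Fraction

# The two candidate approximations of pi from the text.
naive = Fraction(3141593, 1000000)   # huge denominator
simple = Fraction(355, 113)          # small denominator

# Mixing a float with a Fraction yields a float, so these are plain errors.
err_naive = abs(math.pi - naive)
err_simple = abs(math.pi - simple)

print(err_naive)   # about 3.5e-7
print(err_simple)  # about 2.7e-7: comparable precision, much simpler fraction
```

Both errors are on the order of $10^{-7}$, yet $\frac{355}{113}$ achieves it with a denominator four orders of magnitude smaller.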
By itself, the denominator is a terrible measure of how close a rational approximation is. Both $\frac{355}{113}$ and something like $\frac{10^{100}}{113}$ have the same denominator, but only one of them even tries to approximate $\pi$. However, suppose we're trying to approximate $\alpha$ by a rational number and the denominator $b$ is given. How close can we get? It turns out that if we create intervals of length $\frac{1}{b}$, each centered at a different fraction of the form $\frac{a}{b}$, we cover the whole number line. Then $\alpha$ lies in one of these intervals, at distance at most $\frac{1}{2b}$ from its center. Here's an example with $b=3$: the intervals of length $\frac{1}{3}$ centered at $0, \frac{1}{3}, \frac{2}{3}, 1, \ldots$ cover the line.
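This covering argument is easy to check numerically: for a given $b$, the closest fraction $\frac{a}{b}$ is found by rounding $\alpha b$, and its error never exceeds $\frac{1}{2b}$. A minimal sketch (the function name is my own):

```python
import math

def nearest_with_denominator(alpha, b):
    """Return (a, error) where a/b is the fraction with denominator b closest to alpha."""
    a = round(alpha * b)          # alpha lies in the interval centered at a/b
    return a, abs(alpha - a / b)

# The error is bounded by 1/(2b), no matter which denominator we pick.
for b in range(1, 1000):
    a, err = nearest_with_denominator(math.pi, b)
    assert err <= 1 / (2 * b)
```

For instance, with $b=113$ this recovers $a=355$, i.e. the famous approximation $\frac{355}{113}$.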
Therefore, with a big enough denominator we can create arbitrarily precise approximations, which establishes the correlation between the denominator and the notion of closeness we were talking about earlier.
And what about complexity? This one's easier to justify. If not with the denominator, how else would we quantify how complex a fraction is? The only other reasonable candidate that comes to mind is the numerator. However, that measure would penalize big numbers, branding them as complex merely for being big. And although this might match our numerical intuition ($7$ definitely feels simpler than $6198016$), it doesn't match what we're usually interested in when talking about approximations. There's a reason we usually separate the first number in a continued fraction expansion: we mostly don't care for it.
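For reference, the continued fraction expansion alluded to here can be computed with a short floating-point sketch (so only the first few terms are reliable; the function name is mine):

```python
import math

def continued_fraction(alpha, n):
    """First n terms [a0; a1, a2, ...] of the continued fraction of alpha."""
    terms = []
    for _ in range(n):
        a = math.floor(alpha)
        terms.append(a)
        frac = alpha - a
        if frac == 0:
            break
        alpha = 1 / frac
    return terms

# a0 = 3 is the integer part we "mostly don't care for";
# truncating after [3; 7, 15, 1] gives exactly 355/113.
print(continued_fraction(math.pi, 5))  # [3, 7, 15, 1, 292]
```

The large term $292$ explains why $\frac{355}{113}$ is such a good approximation: cutting the expansion just before a large term gives an unusually accurate convergent.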
Depending on the context, you might find better indicators of these notions. For example, when talking about intervals in music theory, since we hear frequencies logarithmically, denominators don't really capture that sense of closeness. In that setting we also rarely care about overly complicated fractions, since past a certain point our ears can no longer detect the difference. But in the general case, the denominator is so important precisely because it captures both intuitive concepts in a single readily available value.