Is there a known general numerically-stable way to calculate $\frac{a-b}{c-d}$, where a is very close to b and c is very close to d, and all variables are stored as floating-point with some precision?
I've come up with
$$\frac{1}{c/a - d/a} - \frac{1}{c/b - d/b}$$
which seems to be more stable, but there are still catastrophic cancellations going on here before divisions are made, causing precision loss. I'm wondering if there's a better way.
When designing algorithms for computing functions or solving equations, it is vital that you clarify the relevant domain at all times. Are you considering all real numbers, or only the far smaller set of floating point numbers?
Consider the problem of writing a computer program which can compute, say, $f(x) = \exp(x)$ for all real $x$ in the representable range.
Even under the best of circumstances you cannot hope to feed the real number $x$ into your program. The best you can hope for is the floating point representation $\hat{x}$ of $x$. The impact of replacing $x$ with $\hat{x}$ can be measured in terms of the condition number of $f$ at the point $x$, i.e. $$\left|\frac{f(x) - f(\hat{x})}{f(x)} \right| \approx \kappa_f(x) \left| \frac{x - \hat{x}}{x} \right|.$$ There is nothing that you can do to change the condition number. If greater accuracy is required, then you must either buy more expensive hardware or simulate a smaller unit round off.
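This effect can be observed directly. For $f(x) = \exp(x)$ the condition number is $\kappa_f(x) = |x f'(x)/f(x)| = |x|$, so a relative perturbation of the input is amplified by roughly $|x|$ in the output. A minimal sketch (the perturbation $10^{-12}$ stands in for the rounding $x \mapsto \hat{x}$ and is chosen purely for illustration):

```python
import math

# The condition number of f(x) = exp(x) is kappa_f(x) = |x * f'(x) / f(x)| = |x|.
# A relative perturbation eps of the input becomes roughly |x| * eps in the output.
x = 30.0
eps = 1e-12                       # stand-in for the rounding x -> x_hat
x_hat = x * (1 + eps)

rel_err_in = abs((x - x_hat) / x)
rel_err_out = abs((math.exp(x) - math.exp(x_hat)) / math.exp(x))

# the amplification factor rel_err_out / rel_err_in is approximately |x| = 30
print(rel_err_in, rel_err_out, rel_err_out / rel_err_in)
```

No reformulation of the evaluation can change this ratio; it is a property of $f$ at $x$, not of the algorithm.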
Then you should turn your attention to computing $f(\hat{x})$ as accurately as possible. In this phase, $\hat{x}$ is a floating point number and it is exact. It is now your responsibility to find an equivalent formula for $f$ which is free from numerical problems. This is a question of mathematical skill. In our toy example, you may choose to truncate the Taylor series of $f$ when the next term is irrelevant compared with the current sum. Certainly, this is slow, and it does not work for negative values of $\hat{x}$, but it is accurate for positive values of $\hat{x}$: there is no subtractive cancellation, as you are just adding positive numbers. You can accurately approximate, say, $\exp(-25)$ by exploiting the identity $\exp(-25) = 1/\exp(25)$. This allows you to accurately compute $f(\hat{x})$ for all floating point numbers $\hat{x}$ which are small enough not to trigger an overflow.
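The strategy just described can be sketched as follows; this is an illustration of the idea, not production code:

```python
import math

def exp_taylor(x):
    """Sketch: sum the Taylor series of exp(x) for x >= 0, stopping when the
    next term no longer changes the partial sum; handle x < 0 through the
    identity exp(-x) = 1/exp(x). Slow, but free of subtractive cancellation,
    since for x >= 0 every term is positive."""
    if x < 0.0:
        return 1.0 / exp_taylor(-x)
    total = 0.0
    term = 1.0            # term n = 0 of the series, x^0 / 0!
    n = 0
    while total + term != total:   # stop once the term is negligible
        total += term
        n += 1
        term *= x / n              # x^n / n! from x^(n-1) / (n-1)!
    return total
```

For example, `exp_taylor(-25.0)` agrees with `math.exp(-25)` to close to full double precision, whereas summing the alternating series for $\exp(-25)$ directly would lose essentially all significant digits to cancellation.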
The third and final phase of your investigation centers on how to actually evaluate your chosen approximation. In the toy example, it would be sensible to select a variant of Horner's method.
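For a fixed truncation order, Horner's method evaluates the resulting polynomial with one multiplication and one addition per term. A minimal sketch for the truncated exponential series (the order $n = 30$ is an arbitrary choice for illustration, adequate for modest $|x|$):

```python
import math

def exp_horner(x, n=30):
    """Sketch: evaluate the degree-n Taylor polynomial of exp(x) in Horner
    form, p(x) = 1 + x*(1 + (x/2)*(1 + (x/3)*(...))), working from the
    innermost factor outward."""
    acc = 1.0
    for k in range(n, 0, -1):
        acc = 1.0 + acc * x / k
    return acc
```

For instance, `exp_horner(1.0)` matches `math.e` to roughly double precision, since the truncation error $1/31!$ is far below the unit roundoff.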
Now, based on your exact choice of words, you are in the second phase, and your numbers $a$, $b$, $c$ and $d$ are floating point numbers and exact. In the absence of floating point exceptions, the computer can evaluate your fraction $$f = (a-b)/(c-d)$$ using three elementary arithmetic operations, incurring a relative error which is less than $\gamma_3$, where $\gamma_k = \frac{ku}{1-ku}$ and $u$ is the unit roundoff. Indeed, as gammatester correctly points out, if $$\frac{1}{2} a \leq b \leq 2 a, \quad \frac{1}{2} c \leq d \leq 2 c,$$ then by Sterbenz's lemma the differences $a-b$ and $c-d$ are computed exactly, and your fraction $f$ is calculated with a relative error less than $\gamma_1$ (less than $u$ if we want to be fanatical about it).
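This can be verified numerically by comparing against exact rational arithmetic; a small sketch with values chosen for illustration:

```python
from fractions import Fraction

# When b lies within a factor of two of a (and d of c), the subtractions
# a-b and c-d are exact, so the direct formula (a-b)/(c-d) commits only
# the single rounding of the final division.
a, b = 1.0000001, 1.0000002      # close, and well within a factor of two
c, d = 2.0000003, 2.0000001

direct = (a - b) / (c - d)

# exact rational reference, built from the exact binary values of a, b, c, d
exact = (Fraction(a) - Fraction(b)) / (Fraction(c) - Fraction(d))
rel_err = abs((Fraction(direct) - exact) / exact)
print(float(rel_err))            # bounded by the unit roundoff
```

Note that `Fraction(a)` converts the *stored* binary value of `a` exactly, which is precisely the second-phase viewpoint: the inputs are floating point numbers and are treated as exact.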
On the other hand, your rewrite is disastrous in the situation where, say, $a$, $c$ and $d$ are close. The divisions $c/a$ and $d/a$ will not be exact, and the rounding errors incurred at that point are magnified by the ill-conditioned subtraction $c/a - d/a$, leaving you with a large relative error. The problem escalates if all four numbers are nearly equal.
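The failure mode is easy to exhibit; a sketch with all four numbers nearly equal (values chosen for illustration), again using exact rational arithmetic as the reference:

```python
from fractions import Fraction

# Compare the direct formula with the proposed rewrite when a, b, c, d are
# all nearly equal: the divisions c/a, d/a, c/b, d/b are inexact, and the
# subsequent cancellations magnify those rounding errors.
a, b = 1.0 + 3e-8, 1.0 + 1e-8
c, d = 1.0 + 7e-8, 1.0 + 2e-8

exact = (Fraction(a) - Fraction(b)) / (Fraction(c) - Fraction(d))

direct = (a - b) / (c - d)
rewrite = 1.0 / (c / a - d / a) - 1.0 / (c / b - d / b)

err_direct = abs((Fraction(direct) - exact) / exact)
err_rewrite = abs((Fraction(rewrite) - exact) / exact)
print(float(err_direct), float(err_rewrite))   # the rewrite is far less accurate
```

Here the direct formula stays accurate to the unit roundoff, because both subtractions are exact, while the rewrite subtracts two large, nearly equal reciprocals whose rounding errors dominate the small true result.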