I am quite familiar with the problem of loss of significance in numerical analysis. It seems to be an even bigger problem in the science lab, where subtracting one physical measurement from another can produce a tremendous relative error, because the uncertainties in the measurements don't subtract, they add.
Question: Is there a special name for this type of error when it arises from subtracting physical measurements? Does anyone know of any resources that specifically address this problem?
A lot of it comes down to experiment design: measure the right quantity, one where the subtraction has already happened. In high school physics I measured the index of refraction of air. I was given an interferometer with a vacuum chamber in one of the legs. The chamber was only a couple of inches long, but I could set the interferometer working and count the fringes that passed by as I evacuated the chamber. My measured value agreed with the published value to within $10^{-5}$! The clever thing was the design: what I had really measured was the difference between the index of refraction of air and the index of refraction of vacuum, which is exactly $1$. My measurement of that difference was about $4\cdot 10^{-5}$, while the published value is $2.9\cdot 10^{-5}$, so my error was over $25\%$. But add the $1$ back and the fractional error becomes much smaller.
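A minimal sketch of that last point, using only the numbers quoted above: the same absolute error that is a large fraction of the small difference $(n-1)$ becomes a tiny fraction of $n$ itself once the exactly known $1$ is added back.

```python
# Sketch: fractional error in the directly measured difference (n - 1)
# versus fractional error in n = 1 + (n - 1). Numbers are the ones
# quoted in the answer above.

measured_diff = 4.0e-5    # measured value of (n_air - 1)
published_diff = 2.9e-5   # published value of (n_air - 1)

# Fractional error in the small difference (n - 1)
frac_err_diff = abs(measured_diff - published_diff) / published_diff

# Fractional error in n itself: same absolute error, much larger denominator
measured_n = 1.0 + measured_diff
published_n = 1.0 + published_diff
frac_err_n = abs(measured_n - published_n) / published_n

print(f"fractional error in (n - 1): {frac_err_diff:.1%}")   # ~38%
print(f"fractional error in n:       {frac_err_n:.6%}")      # ~0.001%
```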
You can also look at how the errors come in. Say you are measuring the shape of a spectral line: you change the wavelength you are sensitive to and measure the intensity at each setting. There is some error in your wavelength setting at each measurement, but you might be able to argue that part of that error is common to all the settings, e.g. a calibration offset. The result is that you can measure the width of the line better than you know the absolute wavelength of its center, because the common error cancels when you take the difference between two settings, as sketched below.
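A small Monte Carlo sketch of that argument (all numbers hypothetical): every wavelength setting in a run shares one systematic offset plus small independent jitter, so the difference of two settings (the width) is far better determined than their absolute position (the center).

```python
# Common-mode vs. independent errors: the shared calibration offset cancels
# in the width (a difference of two settings) but not in the center.
import random

random.seed(0)

N_TRIALS = 100_000
SIGMA_COMMON = 0.10   # nm, offset shared by every setting in one run (hypothetical)
SIGMA_INDEP = 0.01    # nm, independent jitter per setting (hypothetical)

true_left, true_right = 500.0, 500.5   # nm, hypothetical edges of the line

width_errors = []
center_errors = []
for _ in range(N_TRIALS):
    common = random.gauss(0.0, SIGMA_COMMON)          # same for both settings
    left = true_left + common + random.gauss(0.0, SIGMA_INDEP)
    right = true_right + common + random.gauss(0.0, SIGMA_INDEP)

    width_errors.append((right - left) - (true_right - true_left))
    center_errors.append((right + left) / 2 - (true_right + true_left) / 2)

def rms(xs):
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

print(f"RMS error of width:  {rms(width_errors):.4f} nm")   # ~0.014 nm, jitter only
print(f"RMS error of center: {rms(center_errors):.4f} nm")  # ~0.10 nm, offset dominates
```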