I understand the differences between relative and absolute error. I can think of examples when relative error is a more suitable error measure but not when absolute error is more suitable than relative error.
So when is absolute error a more suitable measure of error than relative error?
I will give some examples.
There are cases where the sign of the error is vital.
When firing in support of your advancing infantry, you want the error $E = r - \hat{r}$ to be negative to avoid the possibility of killing your own troops.
When computing the strength $S$ of a bridge, you want to be certain that the computed strength $\hat{S}$ is less than the true strength and greater than the design specification $\tau$, i.e., $$\tau \leq \hat{S} < S.$$
Newton's method for the equation $f(x) = 0$ exhibits one-sided convergence near a simple root (when $f$ is convex or concave there, the iterates approach the root from one side). When the computed residual changes sign, you have exhausted the machine's precision and further iterations are pointless.
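The sign-change stopping rule can be sketched as follows; this is a minimal illustration (the function $f(x) = x^2 - 2$, starting point, and iteration cap are my own choices, not from the answer). Starting to the right of $\sqrt{2}$ on a convex function, the residual stays positive until rounding finally flips its sign, at which point iterating further gains nothing.

```python
import math

def newton_sign_stop(f, fprime, x0, max_iter=50):
    """Newton iteration that stops when the computed residual
    changes sign, signalling machine precision is exhausted."""
    x = x0
    prev_sign = math.copysign(1.0, f(x))
    for _ in range(max_iter):
        r = f(x)
        # Residual hit zero or flipped sign: stop iterating.
        if r == 0.0 or math.copysign(1.0, r) != prev_sign:
            break
        x = x - r / fprime(x)
    return x

# f(x) = x^2 - 2 is convex; from x0 = 2 the iterates decrease
# monotonically toward sqrt(2) with a positive residual.
root = newton_sign_stop(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0)
print(root)  # agrees with sqrt(2) to about machine precision
```

The point is that the stopping test looks at the *sign* of the error in $f$, not its magnitude: a tolerance on $|f(x)|$ would have to be tuned to the problem, while the sign flip is a problem-independent signal that rounding now dominates.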
When mailing gifts to your mother, you want to be sure they arrive before her birthday, not after.