Caveat: I've already searched for this topic here on MSE and on other sites, but I still have not found anything that answers my doubt.
I'm checking my own implementation of a routine. The context is not important. The correct value is $x=e^{-200}$, and I computed $\hat{x}$ with my routine.
I computed the absolute error $|x- \hat{x}|=1.2\cdot10^{-14}$. I took this to mean that $\hat{x}$ has 14 correct digits of $x$.
If I now compute the relative error I get $\frac{|x-\hat{x}|}{|x|} = 8.67 \cdot 10^{72}$. I know it is related to the number of significant digits, and in this case the denominator $x=e^{-200}$ is not zero (even if it is really small).
I'm really puzzled because I can't understand what is going on: the relative error is telling me that the approximation is poor? But the first $14$ digits are equal.
Your $x$ has 86 zeros after the decimal point. So having determined the first 14 zeros, you are still 72 zeros away from the real thing.
More technically, your error is $10^{72}$ times bigger than the actual $x$, which is exactly what your quotient is showing.
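You can check this numerically. A minimal sketch (in Python, assuming the reported absolute error of $1.2\cdot10^{-14}$):

```python
import math

# Exact value and the absolute error reported in the question.
x = math.exp(-200)        # about 1.38e-87
abs_err = 1.2e-14

rel_err = abs_err / abs(x)
print(f"x       = {x:.3e}")    # ~ 1.384e-87
print(f"rel err = {rel_err:.3e}")  # ~ 8.67e+72: the error is 10^72 times x itself
```

Note that the absolute error ($\sim 10^{-14}$) is small only on an absolute scale; compared to $x \sim 10^{-87}$ it is astronomically large, which is what the relative error captures.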