In my numerical analysis course, we had an assignment to use MATLAB to numerically solve the Poisson equation $-\nabla\cdot\nabla u = f$ in one dimension.
We computed the numerical solution, plotted it against the given closed-form solution, and were then told to "calculate the logarithm of the maximum relative error":
$$E=\max_{1\leq i \leq n}\log_{10}\left(\left|\frac{v_i-u_i}{u_i}\right|\right)$$
where $u$ is the closed-form solution and $v$ is the numerical one.
What's the idea behind taking the logarithm of this error?
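For concreteness, here is a minimal sketch of how $E$ might be computed, in Python rather than MATLAB; the arrays `u` and `v` below are hypothetical stand-ins for the closed-form and numerical solutions, and the code assumes no $u_i$ is zero:

```python
import math

def max_log10_relative_error(u, v):
    # E = max_i log10(|(v_i - u_i) / u_i|)
    # Assumes every u_i is nonzero, so the relative error is defined.
    return max(math.log10(abs((vi - ui) / ui)) for ui, vi in zip(u, v))

# Hypothetical sample values: each v_i is off by a relative error of 1e-4,
# so E should come out close to -4.
u = [1.0, 2.0, 3.0]
v = [1.0001, 1.9998, 3.0003]
E = max_log10_relative_error(u, v)
```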
The motivation behind using a relative error measure with IEEE 754 hardware is to determine the order of magnitude by which the mantissa ($d_1.d_2d_3\dots d_k$) of the result differs from that of the solution (in other words: how many machine numbers lie between the computed result and the exact solution).
In practice, a computer could simply compare the two mantissas and evaluate their difference. However, to maintain continuity and to account for a value's proximity to adjacent scales, the absolute error is divided by the whole value $u_i$ and not just by its order of magnitude $\beta^{e}$.
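The "how many machine numbers lie between" idea can be made literal: for finite IEEE 754 doubles of the same sign, reinterpreting the bit patterns as integers and subtracting counts the representable values separating two floats. A sketch (the helper name `ulp_distance` is mine, not standard):

```python
import struct

def ulp_distance(a, b):
    # Reinterpret the IEEE 754 double bit patterns as 64-bit integers.
    # For finite doubles of the same sign, the integer difference equals
    # the number of representable doubles between a and b.
    ia = struct.unpack('<q', struct.pack('<d', a))[0]
    ib = struct.unpack('<q', struct.pack('<d', b))[0]
    return abs(ia - ib)

# 1.0 + 2**-52 is the very next double after 1.0, so the distance is 1.
d = ulp_distance(1.0, 1.0 + 2**-52)
```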
As a result, $$\log_{10}\left(\left|\frac{v_i-u_i}{u_i}\right|\right)$$ gives, up to sign, an approximation of the number of correct significant digits in $v_i$: a value of $-6$, say, means roughly six correct digits.
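A quick numerical check of that reading, with made-up values: take a `v` that agrees with `u` in its first six or so significant digits and look at the negated log of the relative error.

```python
import math

# Hypothetical pair: v matches u in roughly the first six significant digits.
u = 3.141592653589793
v = 3.141592000000000

rel = abs((v - u) / u)
digits = -math.log10(rel)  # ~ number of correct significant digits in v
```

Here `rel` is about $2\times 10^{-7}$, so `digits` lands between 6 and 7, matching the digit count by eye.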