Is relative error a useful measure for quantification of error? Or is it just an approximation for perhaps a better error quantification method?

Consider the following situation. There is a partial differential equation (PDE) with a known analytic solution, and one is solving that equation numerically on a computer. With an appropriate numerical method, the numerical solution converges to the analytic solution as one refines the appropriate numerical parameters (such as by decreasing the grid spacing). Let's say the analytic solution to the PDE is given by the following graph (in one spatial dimension):

Analytic Solution

Let's say one is sufficiently converged in the numerical parameters that the numerical solution is "very close" to the analytic solution (numeric in blue, analytic in red):

Analytic and Numeric

Let's also assume that further numerical refinement is computationally prohibitive, but that we have confirmed the numerical solution is converging toward the analytic solution (and at the rate expected for our numerical scheme). So we claim success and want to publish our results in a paper. (A numerical scheme would likely converge to a more accurate solution than the graph above, but not to infinite accuracy; the above is simply an illustrative example.)

To quantify the error in our scheme for publication, we first look at the relative error, defined as $|(y_{\text{analytic}}-y_{\text{numeric}})/y_{\text{analytic}}|$, where $y_{\text{analytic}}$ is the analytic solution and $y_{\text{numeric}}$ is the numeric solution, since this is a common method of error quantification. Plotting the relative error gives the following graph:

Relative Error

We can see that the relative error is around $0.02$, or $2\%$, over most of the domain. However, the relative error jumps to infinity where the function crosses $0$. This is a well-known problem with relative error: it doesn't really work when the analytic function passes through $0$. Some people say using absolute error there is better. Here is a plot of the absolute error in those two regions:
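This behavior is easy to reproduce numerically. As a hedged sketch (the actual PDE solution isn't specified here, so a sine with zero crossings stands in for $y_{\text{analytic}}$, and a constant offset plays the numeric solution):

```python
import numpy as np

# Stand-in data (assumed, not the PDE from the question): a sine as the
# analytic solution and a constant 2%-of-amplitude offset as the "numeric"
# solution, so max|y_analytic| = 1 and the absolute error is 0.02 everywhere.
x = np.linspace(0.0, 2.0 * np.pi, 1001)
y_analytic = np.sin(x)
y_numeric = y_analytic + 0.02

# y_analytic is exactly 0 at x = 0, so suppress the divide-by-zero warning.
with np.errstate(divide="ignore"):
    rel_err = np.abs((y_analytic - y_numeric) / y_analytic)

print(np.median(rel_err))  # modest over most of the domain (a few percent)
print(np.max(rel_err))     # inf -- unbounded at the zero crossings
```

The absolute error here is a flat $0.02$, yet the relative error ranges from a few percent to infinity depending only on how close $y_{\text{analytic}}$ is to $0$.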

Absolute Error in Problem Region

So we have an absolute error of $100$ in those two regions. The real question (which is really what we are after) is whether we should consider $100$ to be "huge", "moderate", or "small" as an error estimate. The absolute error alone does not tell us this, so simply using absolute error as an error estimate also has its limitations.

To quantify the "largeness" of the error, we can construct a new error estimate: divide the absolute error by $y_0\equiv\underset{\Omega}{\max}|y_{\text{analytic}}|$, the maximum of the absolute value of the analytic solution over the domain of interest $\Omega$. This gives the following error estimate close to where $y_{\text{analytic}}=0$:
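A minimal sketch of this normalization, on stand-in data (a sine for the analytic solution and a constant $2\%$ offset for the numeric one; both are assumptions, not the actual PDE):

```python
import numpy as np

# Stand-in data (assumed): sine analytic solution, constant-offset numeric one.
x = np.linspace(0.0, 2.0 * np.pi, 1001)
y_analytic = np.sin(x)
y_numeric = y_analytic + 0.02

# y0 = max over the domain of |y_analytic|: one global scale for the error.
y0 = np.max(np.abs(y_analytic))
new_err = np.abs(y_analytic - y_numeric) / y0

# The estimate stays finite and flat, about 2% everywhere -- including at
# the zero crossings, where the relative error diverges.
print(new_err.min(), new_err.max())
```

Since the normalization is a single global constant, the new estimate is just a rescaled absolute error; what it adds over raw absolute error is the interpretation of "how large" relative to the solution's overall amplitude.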

New Error Estimate

Here we can see that this new error estimate is $0.02$, or $2\%$, in the problem regions. This is more in line with our intuition from the comparison of the numeric and analytic solutions, and it shows that the error is fairly small, roughly $2\%$, everywhere in the domain. We can also extend the new error estimate outside the problem regions. The graph below shows both the new error estimate and the relative error, with the relative error in blue and the new error estimate in red:

New error and Relative error

This shows that the new error estimate is very close to the relative error except in the problem regions, where it is more accurate (i.e. more helpful to us in quantifying the "largeness" of the error). What I'm claiming (and seeking thoughts on) is that the relative error is a "lazy" way to quantify the "largeness" of an error: it gets it "right" where $y_{\text{analytic}}$ is not "too close" to $0$, but is extremely unhelpful otherwise. I say "lazy" because it is a local way to measure error, whereas the new error estimate requires a global analysis (finding the maximum of the function over the whole domain). The new error estimate also avoids having to define what counts as "too close" to $0$.

The only problem I can foresee with this new error estimate is that it provides only a single scale for the $y$-axis. Different parts of the domain might call for different $y$ scales, but that could be factored into the above analysis, which would still give something different from (and, I claim, more useful than) the relative error.
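One way to factor several $y$ scales into the estimate, sketched under assumptions (the helper name, the window split, and the sine test data are all hypothetical): split the domain into windows and normalize each window's absolute error by that window's own maximum of $|y_{\text{analytic}}|$.

```python
import numpy as np

def windowed_error(y_analytic, y_numeric, n_windows):
    """Absolute error, normalized per window by max|y_analytic| there.

    Hypothetical helper, not from the question: with n_windows = 1 it
    reduces to the single-global-scale estimate discussed above.
    """
    err = np.empty(y_analytic.size, dtype=float)
    for idx in np.array_split(np.arange(y_analytic.size), n_windows):
        scale = np.max(np.abs(y_analytic[idx]))
        if scale == 0.0:                        # window is identically zero:
            scale = np.max(np.abs(y_analytic))  # fall back to the global scale
        err[idx] = np.abs(y_analytic[idx] - y_numeric[idx]) / scale
    return err

# Sine test data: one window reproduces the single global scale.
x = np.linspace(0.0, 2.0 * np.pi, 1001)
err_global = windowed_error(np.sin(x), np.sin(x) + 0.02, n_windows=1)
# With more windows, low-amplitude regions get their own smaller scale,
# so the same absolute error reads as a larger fraction there.
err_local = windowed_error(np.sin(x), np.sin(x) + 0.02, n_windows=8)
```

Whether the per-window scales should come from the analytic solution itself or from externally chosen "desired $y$ scales" is a modeling decision; the sketch only shows that the bookkeeping is straightforward.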

Are there still situations where the relative error is more useful than this new error estimate (perhaps modified by the consideration in the previous paragraph), or is the relative error simply a "lazy" way to quantify error, useful only where the analytic function is not "too close" (which would itself need to be quantified) to $0$?

Thoughts would be appreciated. Thanks.