In applied statistics, for example when analyzing data from a science experiment, we sometimes use the "absolute error", while most of the time we calculate the "relative error". Generally, under what circumstances should we use the "relative error" instead of the "absolute error"?
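To make sure we are talking about the same thing, here are the definitions I have in mind (the usual ones, as far as I know), for a measured value $\hat{x}$ and a true value $x$:

$$\text{absolute error} = |\hat{x} - x|, \qquad \text{relative error} = \frac{|\hat{x} - x|}{|x|}.$$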
If we use the relative error, are we implicitly assuming that the variance of the sampling distribution is positively correlated with the value of the mean? Is there a fundamental axiom in mathematics or statistics called "scale independence"?
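Here is my intuition written out as a toy model (my own notation, not from a textbook): suppose the measurement is multiplicative, $\hat{x} = x(1 + \epsilon)$, where $\epsilon$ has standard deviation $c$ that does not depend on $x$. Then

$$\operatorname{sd}(\hat{x}) = c\,|x|, \qquad \operatorname{sd}\!\left(\frac{\hat{x} - x}{x}\right) = \operatorname{sd}(\epsilon) = c,$$

so the spread of the raw measurement grows with the true value, while the relative error has the same spread at every scale. Is this scale-free property what justifies preferring the relative error?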
It is the same in the financial mathematics literature: the variance of the random variable $X_t$ is usually correlated with the value of $X_{t-1}$; the greater $X_{t-1}$ is, the greater the variance of $X_t$. This often leads to something like a lognormal distribution.
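For instance (a standard toy model, not tied to any particular paper): with i.i.d. multiplicative shocks $X_t = X_{t-1} e^{\epsilon_t}$, where $\epsilon_t \sim \mathcal{N}(\mu, \sigma^2)$, we get

$$\log X_t = \log X_0 + \sum_{s=1}^{t} \epsilon_s \sim \mathcal{N}\!\left(\log X_0 + t\mu,\; t\sigma^2\right),$$

so $X_t$ is lognormally distributed, and the conditional standard deviation $\operatorname{sd}(X_t \mid X_{t-1}) = X_{t-1}\,\operatorname{sd}(e^{\epsilon_t})$ is proportional to $X_{t-1}$, exactly the variance-grows-with-level behavior I described above.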