I am trying to find the %Accuracy, for which I use this equation: %Accuracy = 100 - %Error.
So far, I have run into two problems with this equation:
When the Exact value is zero, the fraction can't be used -> solved by adding the same value to both Exact and Measured, to avoid the zero denominator.
When the Measured value is more than twice the Exact value (or negative), %Accuracy becomes negative, and I don't see the meaning behind that.
For example:
if: Exact = 20; Measured = 25; %Accuracy = 75%;
But, when: Exact = 20; Measured = 45; %Accuracy = -25%;
What is the meaning of -25%? And how can I constrain %Accuracy to the range [0, 100] so that values falling outside it are still handled sensibly?
For your problem 1, if the exact answer is zero, then a miss is as good as a mile: any measured value other than zero is equally wrong. For your problem 2, there is a better way. If x and y are two non-zero real numbers, define
%Error := 100 * min(1, 2 * |x - y| / (|x| + |y|)); if exactly one of x and y is zero, %Error := 100%, and if both are zero, %Error := 0%. With this definition, %Error always lies between 0% and 100%, so your definition %Accuracy = 100 - %Error behaves as you would expect.
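The definition above can be sketched as a small Python function (the name `percent_accuracy` is my own, not from the post):

```python
def percent_accuracy(exact, measured):
    """Symmetric %Accuracy in [0, 100], defined for any two real numbers.

    Implements %Error = 100 * min(1, 2*|x - y| / (|x| + |y|)) with the
    zero cases handled separately, then returns 100 - %Error.
    """
    if exact == 0 and measured == 0:
        error = 0.0    # both zero: perfect agreement
    elif exact == 0 or measured == 0:
        error = 100.0  # exactly one is zero: maximally wrong
    else:
        error = 100.0 * min(1.0, 2.0 * abs(exact - measured)
                            / (abs(exact) + abs(measured)))
    return 100.0 - error
```

Note that for your example Exact = 20, Measured = 45, this gives a positive accuracy of about 23% rather than -25%, because the denominator averages the magnitudes of the two values.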
Alternatively, you can adjust your original equation to %Error = 100 * min(1, |Exact - Measured| / |Exact|). This caps %Error at 100%, so %Accuracy stays in [0, 100], but it still requires Exact to be non-zero.
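This clamped variant of the original formula can be sketched as follows (again, the function name is mine; it assumes Exact is non-zero):

```python
def percent_accuracy_clamped(exact, measured):
    """%Accuracy from the clamped original formula; requires exact != 0.

    %Error = 100 * min(1, |Exact - Measured| / |Exact|), so any measured
    value at least twice as far from Exact as zero scores 0% accuracy.
    """
    error = 100.0 * min(1.0, abs(exact - measured) / abs(exact))
    return 100.0 - error
```

With this version, Exact = 20 and Measured = 25 still gives 75%, while Exact = 20 and Measured = 45 gives 0% instead of -25%.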