I am trying to calculate the accuracy of a program.
I've looked at the accuracy formula and it looks like this:
e_value = 400
t_value = 1000
accuracy = ((t_value - e_value)/t_value) * 100
And this works as expected (60%).
But what if e_value is much bigger than t_value?
In my case for example, my e_value is 18 while the t_value is 2.
The accuracy should (presumably) be a very low number, since 18 is 9 times 2, but plugging the numbers into the formula gives -800%. That doesn't seem right, and it's not what I'm after either: I want a number between 0% and 100%.
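For concreteness, that plug-in works out to $\frac{2 - 18}{2} \times 100 = -8 \times 100 = -800$.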
You need to decide what the percentage is for. If e_value is the number of tests passed and t_value is the number of tests given, then t_value is always the larger of the two and there is no problem. If you are just comparing one computed number against another, the difference can exceed the denominator and push the result outside the $0-100\%$ range. One choice is to make the denominator the greater of e_value and t_value, and then subtract the relative difference from $1$. That will duly keep your values in the $0-100\%$ range. In the case you cite, you would compute $1-\frac {18-2}{18}\approx 0.11$, then multiply by $100\%$ to get $11\%$. Whether this conveys useful information is up to you.
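Here is a minimal Python sketch of that approach; the function name bounded_accuracy and the handling of the both-zero case are my own additions, and it assumes both inputs are non-negative:

def bounded_accuracy(e_value, t_value):
    # Accuracy as a 0-100% figure: 1 - |e - t| / max(e, t),
    # i.e. the smaller value as a fraction of the larger.
    larger = max(e_value, t_value)
    if larger == 0:
        return 100.0  # both values are 0: treat as a perfect match
    return (1 - abs(e_value - t_value) / larger) * 100

print(bounded_accuracy(18, 2))  # 11.11..., matching the 11% above
print(bounded_accuracy(2, 18))  # same result

Note that this form is symmetric in the two arguments, so it no longer matters which of the two values happens to be larger.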