I'm dividing a number A (positive, with up to 2 decimal places) by another number B (positive, with up to 3 decimal places) to arrive at a result, C.
I'd like to round C to the fewest decimal places such that multiplying the rounded C back by B always equals A after the product is rounded to 2 decimal places.
Is there a method for determining how many decimal places are needed in C for the condition
round(C * B, 2) = A
to always hold?
I don't think significant figures/digits matter here, as I'm not concerned with measurement precision in any of the results -- I just need the fewest decimal places in C that satisfy that condition. It may also be worth noting that I can't change the value of A or of B.
My gut tells me the answer is simply the sum of the decimal places in A and B (5), but I can't seem to find anything online that covers rounding and then reversing the operation.
Ultimately, I'm hoping to find a method for determining this number of decimal places, so that I can vary the precision of C based on whether the other values happen to have trailing zeroes.
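To make the requirement concrete, here's a quick Python sketch of the check I want to always pass (the sample values and the `round_trip_holds` helper are purely illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_trip_holds(a: str, b: str, places: int) -> bool:
    """Does round(C * B, 2) == A when C = A / B is rounded to `places` decimals?"""
    A, B = Decimal(a), Decimal(b)
    C = (A / B).quantize(Decimal(1).scaleb(-places), rounding=ROUND_HALF_UP)
    return (C * B).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) == A

# A has up to 2 decimal places, B up to 3.
for places in range(1, 6):
    print(places, round_trip_holds("1.23", "4.567", places))
```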
How many digits C needs to the left of the decimal point depends on how many decimals B has to the right of it (dividing by a small B makes C large). Similarly, how many decimal places you need to the right in C depends on the size of B, because any rounding error in C gets multiplied by B. Write $\epsilon_c$ and $\epsilon_b$ for the errors introduced by rounding $C$ and $B$:
$(C + \epsilon_c)(B + \epsilon_b) = CB + C\epsilon_b + B\epsilon_c + \epsilon_b\epsilon_c \approx CB + C\epsilon_b + B\epsilon_c$ (the $\epsilon_b\epsilon_c$ term is insignificant and can be ignored)
For $\operatorname{round}(X, 2)$ to give back $A$, the total error must stay below half a unit in the second decimal place: $|C\epsilon_b + B\epsilon_c| < 0.005$. Splitting that budget evenly would give $|\epsilon_b| < 0.0025/C$ and $|\epsilon_c| < 0.0025/B$. In your case $B$ is exact, so $\epsilon_b = 0$ and the whole budget goes to $C$: you need $|\epsilon_c| < 0.005/B$.
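As a sketch of how you could turn that bound into a digit count (assuming round-half-up; `needed_places` and `verify` are just illustrative names): rounding $C$ to $k$ places makes $|\epsilon_c| \le 0.5 \cdot 10^{-k}$, so you need the smallest $k$ with $0.5 \cdot 10^{-k} \cdot B < 0.005$.

```python
import math
from decimal import Decimal, ROUND_HALF_UP

def needed_places(b: float) -> int:
    """Smallest k with 0.5 * 10**-k * B < 0.005, i.e. k > log10(B) + 2."""
    k = math.ceil(math.log10(b) + 2)
    # ceil() is not strict when log10(B) + 2 is an exact integer (e.g. B = 10),
    # so bump k once more in that boundary case.
    if 0.5 * 10.0 ** (-k) * b >= 0.005:
        k += 1
    return max(k, 0)

def verify(a: str, b: str, k: int) -> bool:
    """Spot-check that round(C * B, 2) == A when C is kept to k places."""
    A, B = Decimal(a), Decimal(b)
    C = (A / B).quantize(Decimal(1).scaleb(-k), rounding=ROUND_HALF_UP)
    return (C * B).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) == A

print(needed_places(4.567))    # 3, since 0.5e-3 * 4.567 = 0.0022835 < 0.005
print(needed_places(999.999))  # 5
print(verify("1.23", "4.567", needed_places(4.567)))  # True
```

If $B$ stays below $1000$, this never needs more than 5 decimal places, which matches your gut feeling of 5 as the worst case; smaller $B$ gets away with fewer.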