We write software for managing recipes and are working on moving from an approximation-based decimal-to-fraction conversion (for example, anything between 0.03125 and 0.09375 becomes 1/16) to a math-based conversion. We are running into a few problems doing the conversion. The numbers we are dealing with here come from unit conversions of foods within a recipe.
What we need to determine is how many significant decimal places to use when converting. For example, 0.0625 at 0.01 significant decimals gives 1/14, but at 0.001 significant decimals we get the proper 1/16. However, at times we'll end up with a number like 0.659999966621399, which at 0.01 significant decimals is 2/3 and at 0.001 significant decimals is 29/44.
Is there any way we can determine how best to handle this scenario?
I know this is not a programming site, but here is the code we're using (CoffeeScript, with the bigRat library):
class Rational
  # Find the smallest-denominator fraction within epsilon of the given float.
  @rationalize: (float, epsilon = .01) ->
    rational = bigRat(float)
    denominator = 0
    numerator = undefined
    error = undefined
    loop
      denominator++
      # nearest numerator for this denominator
      numerator = Math.round((rational.numerator * denominator) / rational.denominator)
      # distance from the candidate fraction to the exact value
      error = rational.minus(bigRat(numerator, denominator)).abs().valueOf()
      break unless error > epsilon
    fraction = bigRat(numerator, denominator)
    intPart = fraction.floor()
    fracPart = fraction.minus(intPart)
    [intPart.valueOf(), fracPart.numerator.valueOf(), fracPart.denominator.valueOf()]
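For reference, the loop can be sketched in Python with the standard fractions module; this is just a port of the CoffeeScript above, under the assumption that bigRat's exact rational arithmetic behaves like Fraction:

```python
from fractions import Fraction

def rationalize(x, epsilon=0.01):
    """Smallest-denominator fraction within epsilon of x (a sketch of the loop above)."""
    rational = Fraction(x)  # exact value of the binary float
    denominator = 0
    while True:
        denominator += 1
        # nearest numerator for this denominator (exact rounding on Fractions)
        numerator = round(rational * denominator)
        error = abs(rational - Fraction(numerator, denominator))
        if error <= epsilon:
            break
    fraction = Fraction(numerator, denominator)
    int_part = fraction.numerator // fraction.denominator
    frac_part = fraction - int_part
    return int_part, frac_part.numerator, frac_part.denominator

# reproduces the behaviour from the question:
# rationalize(0.0625, 0.01)  -> (0, 1, 14)
# rationalize(0.0625, 0.001) -> (0, 1, 16)
```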
The biggest thing about recipes is "the proportions of quantities relative to each other."
The best thing (IMO) you can do is move as many of the quantities to integers as possible: multiply all quantities by some integer constant $c$ so that the smallest becomes an integer, rationalize the remaining non-integers to "reasonable" fractions (with no denominator greater than $6$, say), then scale back down by the constant $c$ and tweak again to fit with "known" measurements.
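A minimal sketch of that idea in Python, assuming a hypothetical snap_proportions helper and using Fraction.limit_denominator to cap denominators at $6$:

```python
from fractions import Fraction

def snap_proportions(quantities, max_den=6):
    # Hypothetical helper: express each quantity relative to the smallest,
    # snap those ratios to fractions with small denominators, then scale
    # back so the absolute amounts stay close to the originals.
    smallest = min(quantities)
    ratios = [Fraction(q / smallest).limit_denominator(max_den) for q in quantities]
    scale = Fraction(smallest).limit_denominator(1000)  # tidy form of the smallest
    return [r * scale for r in ratios]
```

For example, snap_proportions([0.25, 0.45]) recovers the exact 5:9 proportion as the fractions 1/4 and 9/20, rather than rationalizing each quantity independently.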
So if the proportions are something like $0.25$ liter water to $0.45$ kg flour, scaling both by $44$ yields $11$ liters water : $20$ kg flour (after rounding $19.8$ up). Direct conversion to US measurements might have yielded $1$ cup water : $1$ pound flour ~ $2$ cups flour, or it might not have been so easy...
The second tweaking step should not "double" the error accumulation, or else the proportions will be quite inaccurate: if the first tweak was negative, the second should be non-negative.
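One way to read that last rule as code, again as a sketch (the snap_pair helper and the fixed denominator are hypothetical): snap the first quantity to the nearest fraction, then round the second in the opposite direction so the two errors partially cancel in the ratio:

```python
import math
from fractions import Fraction

def snap_pair(a, b, denom=4):
    # Hypothetical sketch: snap a and b to multiples of 1/denom, forcing the
    # two rounding errors to have opposite signs so they do not accumulate
    # in the proportion a : b.
    a_snap = Fraction(round(a * denom), denom)
    first_tweak = float(a_snap) - a
    if first_tweak < 0:
        # first tweak was negative, so make the second non-negative
        b_snap = Fraction(math.ceil(b * denom), denom)
    else:
        b_snap = Fraction(math.floor(b * denom), denom)
    return a_snap, b_snap
```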