Given any positive rational fraction F with at most 16 digits beyond the decimal point,
maximum value 0.9999999999999999,
minimum value 0.0000000000000001,
where 1 > F > 0 and F is a multiple of 0.0000000000000001 (i.e., of 10^-16).
Is there an algorithm to determine the number of significant digits D of the
fraction, analogous to using $\lfloor\log_{10}(I)\rfloor + 1$ to determine the
number of decimal digits of an integer I?
For example: if F = 0.5000 then D = 1;
if F = 0.7500 then D = 2;
if F = 0.6660 then D = 3; and so on.
I would like to multiply F by 10^D to create an integer I that can then be printed using simple binary-to-decimal methods. This would be an alternative to using a table of 1600 constants, or to performing 16 multiplications by 10 and checking each result for the last significant digit.
You can just check the lowest digit and see if it is zero. If not, you have $16$ digits. If so, check the next to lowest and keep going up until you find a non-zero digit.
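The approach above can be sketched as follows. This is a minimal illustration, not the asker's implementation: it assumes F arrives as a decimal string (to avoid binary floating-point rounding), scales it to the 16-digit integer F * 10^16, and then strips trailing zeros while counting down from 16, which yields both D and I = F * 10^D.

```python
def significant_digits(f_str):
    """Return (D, I) for a fraction given as a decimal string like '0.7500',
    where D is the number of significant digits and I = F * 10**D."""
    # Scale to a 16-digit integer: pad the fractional digits out to 16 places.
    digits = f_str.split('.')[1].ljust(16, '0')
    i = int(digits)
    d = 16
    # Lowest digit zero? Drop it and decrement the count; repeat until nonzero.
    while d > 1 and i % 10 == 0:
        i //= 10
        d -= 1
    return d, i

print(significant_digits('0.5000'))  # (1, 5)
print(significant_digits('0.7500'))  # (2, 75)
print(significant_digits('0.6660'))  # (3, 666)
```

In the worst case (lowest digit already nonzero) the loop exits immediately with D = 16; in the best case it performs 15 divisions, so it replaces the 16 multiplications the question mentions with at most 15 divisions by 10 and no constant table.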