Might be more of a Comp Sci question, so apologies if it's not appropriate.
Basically, I have a half-open range bounded by two decimal fractions less than 1, and I need to find a short binary fraction lying between them.
For example: [.665811575, .6658289)
My method is to choose the shortest decimal number in the interval (.66582) and apply the following algorithm (sketched in code below):
If the number is >= 0.5, write down "1" and subtract 0.5
Otherwise (it is < 0.5), write down "0"
Multiply by 2
Repeat until the number is exactly 0.0
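In code, my method looks something like this (a Python sketch for illustration; my actual script may differ in detail):

    def to_binary_fraction(x, max_bits=64):
        """Expand 0 <= x < 1 into its binary digits, most significant first."""
        bits = []
        while x != 0.0 and len(bits) < max_bits:
            if x >= 0.5:
                bits.append("1")
                x -= 0.5
            else:
                bits.append("0")
            x *= 2.0
        return "." + "".join(bits)

    # A float carries 53 significand bits, and for an input >= 0.5 the
    # subtractions and doublings are exact, so the loop stops within
    # 53 iterations -- hence the 53-bit result I mention below.
    print(to_binary_fraction(0.66582))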
This works; however, the shortest decimal leads to quite a long binary number: a script I wrote spat out a 53-bit result. This is an exam question, so I don't have time to do 53 iterations of the algorithm by hand.
Does anyone know how I can pick a better number from the range, one that leads to a shorter binary number? Or maybe you have a better approach?
Thanks
If I recall correctly, your method is close to what I'd suggest.
The endpoints differ by about 1.7 × 10^-5, so you want accuracy of roughly 1 in 100,000; since 2^17 = 131,072 exceeds that, I believe 17 digits after the binary point should do it. The expansion may very well be infinite and non-repeating, so I'd quit either after a fixed number of iterations (the 17 I suggest) or as soon as the value written so far, x, satisfies a <= x < b. The remainder may never hit exactly 0.0, so don't count on that stopping condition.
Note to readers: it's been a long time since I've written any code, so an edit would be appreciated; I think the sketch below is pretty close.
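Something like this Python sketch is what I have in mind (the function name, the midpoint starting value, and the early-exit test are my own choices, not anything canonical):

    def short_binary_between(a, b, max_bits=17):
        """Return a short binary fraction string x with a <= x < b."""
        x = (a + b) / 2   # any point inside [a, b) will do; the midpoint is convenient
        value = 0.0       # number denoted by the bits written so far
        weight = 0.5      # place value of the next bit
        bits = []
        for _ in range(max_bits):
            if x >= weight:
                bits.append("1")
                x -= weight
                value += weight
            else:
                bits.append("0")
            weight /= 2
            # Quit early: the truncated value is always < b (it never
            # exceeds the midpoint), so once it reaches a it lies in [a, b).
            if value >= a:
                return "." + "".join(bits).rstrip("0")
        raise ValueError("interval too narrow for max_bits bits")

For the interval in the question this exits after 16 bits with .1010101001110011 (43635/65536 = 0.6658172607421875), so answers even shorter than 18 bits exist; 17 bits after the point is still the right worst-case bound for an interval about 1.7 × 10^-5 wide.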
It turns out an 18-bit expansion works for this question (please verify):
.101010100111001011
equals
0.665813446044921875
which is between .665811575 and .6658289.
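A quick machine check of that arithmetic (Python):

    bits = "101010100111001011"
    value = int(bits, 2) / 2 ** len(bits)  # = 174539 / 2**18, exactly representable
    print(f"{value:.18f}")                 # 0.665813446044921875
    assert 0.665811575 <= value < 0.6658289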