So I'm trying to figure out how best to implement a binary search algorithm to find the "optimal" font size for a piece of text to fit into a given space, to the nearest 0.5pt.
My understanding is this:
Say I have a starting, user-defined font size of 40pt. I first check whether the text fits at that size; if it does, great, stop.
If it doesn't fit, halve it to 20pt and see if that fits. If it does, add half the step value (10) and try again with that value (30). If it doesn't fit, subtract half the value again (down to 10), and so on.
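In code, my current approach looks roughly like this (just a sketch; `fits()` is a hypothetical predicate standing in for the real "does the text fit at this size?" measurement):

```python
def find_size(start, fits, iterations=8):
    """My current bisection: halve the adjustment each round, moving
    up when the candidate size fits and down when it doesn't."""
    if fits(start):
        return start            # the user's size already fits: stop
    size = start / 2            # e.g. 40 -> 20
    step = start / 4            # next adjustment, e.g. 10
    for _ in range(iterations):
        if fits(size):
            size += step        # fits: try a bit larger
        else:
            size -= step        # too big: try a bit smaller
        step /= 2
    return size

# Suppose the true maximum size that fits is 11pt:
print(find_size(40, lambda s: s <= 11))  # converges toward 11 but never lands on it
```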
My issue is this: Say I know that the maximum font size that will fit is 11pt.
40 x--> 20 x--> 10 ---> 15 x--> 12.5 x--> 11.25 x--> 10.625 ---> 10.9375 ---> etc.
    -20      -10     +5      -2.5       -1.25      -0.625       +0.3125
But at no point will I land on exactly 11pt; the decimal places just keep getting longer.
How do I stop this from happening and enforce an "end point" that I know will fit?
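To make the target concrete: since I only care about sizes to the nearest 0.5pt, I think what I'm after is equivalent to a classic integer binary search over half-point units, something like the sketch below (same hypothetical `fits()` as above); I just can't see how to get my halving scheme to behave like it.

```python
def max_fitting_size(lo, hi, fits):
    """Largest multiple of 0.5pt in [lo, hi] that fits, or None if none do.
    Searching in integer half-point units keeps every candidate exact."""
    lo2, hi2 = int(lo * 2), int(hi * 2)   # e.g. 1pt..40pt -> 2..80 half-points
    best = None
    while lo2 <= hi2:
        mid = (lo2 + hi2) // 2
        if fits(mid / 2):
            best = mid / 2     # mid fits: remember it, search larger sizes
            lo2 = mid + 1
        else:
            hi2 = mid - 1      # mid too big: search smaller sizes
    return best

print(max_fitting_size(1, 40, lambda s: s <= 11))  # -> 11.0
```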