I had to come up with a simple algorithm on a low-powered embedded system, because repeatedly dividing by 1000 to get the wanted digits (the first three, basically) caused a loss of precision in the floating-point operations: 10 million became 9.8M, for example.
Here is my basic algorithm to get 1.00k from 1000:
- Take the first 3 digits: 100
- Find the magnitude by division, and subtract it from the number's digit count: 4 - 3 = 1
- Divide by the appropriate power of 10: 100 / 10^(3-1) = 1.00, and with the magnitude known (3 = k), that gives 1.00k.
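The three steps above might be sketched like this in C (a minimal sketch; digit_count, first_three and format_eng are my own helper names, not anything from the original code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Count decimal digits of v (v >= 1). */
static int digit_count(unsigned long v) {
    int n = 1;
    while (v >= 10) { v /= 10; n++; }
    return n;
}

/* Keep only the leading three digits, e.g. 12345 -> 123. */
static unsigned long first_three(unsigned long v) {
    while (v >= 1000) v /= 10;
    return v;
}

/* Integer-only version of the steps above: 1000 -> "1.00k". */
static char *format_eng(unsigned long v, char *buf) {
    static const char suffix[] = { '\0', 'k', 'M', 'G', 'T' };
    int digits = digit_count(v);
    int magnitude = ((digits - 1) / 3) * 3;  /* 0, 3, 6, ... */
    int diff = digits - magnitude;           /* 1..3 digits before the point */
    unsigned long f3 = first_three(v);
    unsigned long div = 1;                   /* div = 10^(3 - diff) */
    for (int i = 0; i < 3 - diff; i++) div *= 10;
    if (div == 1)                            /* all three digits are integral */
        sprintf(buf, "%lu%c", f3, suffix[magnitude / 3]);
    else
        sprintf(buf, "%lu.%0*lu%c", f3 / div, 3 - diff, f3 % div,
                suffix[magnitude / 3]);
    return buf;
}
```

Note the decimal point is placed purely with integer division and remainder, so no floating point is involved at any step.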
An issue is that I want to do this for numbers below 1 as well.
My first thought was that the inverse would work, because 1/1000 = 0.001 (k -> m), but I seem to have been mistaken: the magnitude of the inverted number can differ from that of the original. For example, 2000 takes k (10^3), but 1/2000 = 0.0005 should be 500u (10^-6), not 0.5m.
Is there a clever algorithm to do this?
My current code (if it needs explanation, ask) to deal with the above was this:

magnitude = 0;
for (; value >= 1000*1000; value /= 1000)  /* reduce by thousands while value >= 1e6 */
    magnitude += 3;
if (value >= 1000)
    magnitude += 3;
/* digits of the reduced value beyond its last full group of three */
diff = digitcount(value) - (value >= 1000 ? 3 : 0);
return firstthree(value) / ipow10(3 - diff);  /* ipow10(n) = 10^n; C's ^ is XOR */
I tried a for loop with value < 1; value *= 1000, but the result was something like 100m 10m 1000u 100u 10u 1000n (relying on the inverse to get the magnitude), or 0.1u, 0.01u, 0.001u (firstthree() obviously fails here), which is not the autoranging I am looking for, of course.
I expect that if the magnitude of a number below 1 can be found, I can then multiply by thousands until I have a number above 1?
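For what it's worth, that last idea can be sketched symmetrically: multiply by 1000 until the mantissa reaches 1, stepping the magnitude down by 3 each time, so the first-three-digits step never sees a value below 1 (normalize is a made-up name, and this assumes a double input):

```c
#include <assert.h>

/* Bring value into [1, 1000) by thousands, tracking the SI magnitude:
   0.00123 -> mantissa ~1.23, magnitude -3 (i.e. the "m" prefix). */
static void normalize(double value, double *mantissa, int *magnitude) {
    int m = 0;
    while (value >= 1000.0) { value /= 1000.0; m += 3; }
    while (value > 0.0 && value < 1.0) { value *= 1000.0; m -= 3; }
    *mantissa = value;
    *magnitude = m;
}
```

The magnitude then indexes one shared suffix table, e.g. n, u, m, (none), k, M, so values above and below 1 go through the same formatting path.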
Look up scaled arithmetic. It was the norm in FORTH, a lightweight language popular on underpowered embedded systems, which has integer arithmetic only.
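A minimal sketch of what that looks like, assuming readings are stored as integer micro-units so floats never enter the picture (format_scaled and the two-decimal output are my own illustrative choices, not from any standard API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Scaled arithmetic: the value is an integer count of micro-units
   (e.g. microvolts), so prefix selection needs only integer division.
   1234567 micro-units -> "1.23" (no prefix); 12345 -> "12.34m". */
static void format_scaled(unsigned long microunits, char *buf) {
    static const char *suffix[] = { "u", "m", "", "k", "M" };
    int band = 0;
    unsigned long v = microunits;
    while (v >= 1000 && band < 4) { v /= 1000; band++; }
    if (band == 0) {                       /* below 1000u: print as-is */
        sprintf(buf, "%luu", microunits);
    } else {
        unsigned long scale = 1;           /* scale = 1000^band */
        for (int i = 0; i < band; i++) scale *= 1000;
        unsigned long centis = (microunits / (scale / 100)) % 100;
        sprintf(buf, "%lu.%02lu%s", microunits / scale, centis, suffix[band]);
    }
}
```

Because the unit of storage is fixed (micro-units here), the magnitude for sub-1 values comes out automatically; there is no separate "numbers below 1" case at all.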