Earlier I asked some questions and found some interesting sources on the internet about approximate division done by counting the number of $0$s following the most significant bit of the denominator.
Example: We represent a fraction $7/3$ as two ordered integers $(7,3)$.
We want to calculate a floating point approximation of it.
If we multiply both numerator and denominator by 11 we get the representation $(77,33)$ (for the same fraction).
Looking at the binary representation of 33: 100001, we may be tempted to replace the division by a bit shift, if we pretend the binary expansion is 100000 (that is, pretending the second 1 and everything that follows it is 0).
The relative error will be about $1/32$, or in other words about $3\%$, but all the operations involved are very fast to perform in hardware (as long as we know that a binary multiplicative "number-friend" of 3 is 11).
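As a minimal C sketch of the whole trick (the function name `approx_div3` is mine, and I assume the input is small enough that `x * 11` does not overflow 32 bits):

```c
#include <stdint.h>

/* Approximate x/3: multiply by the "number-friend" 11 so the divisor
 * becomes 33 = 100001b, then pretend it is 100000b = 32 and divide by
 * shifting right 5 bits.  The result overshoots by about 1/32 (~3%)
 * because we shrank the divisor from 33 down to 32. */
static uint32_t approx_div3(uint32_t x)
{
    return (x * 11u) >> 5;    /* x*11/32 ~ x*11/33 = x/3 */
}
```

For example, `approx_div3(96)` gives 33 where the exact quotient is 32, about $3\%$ high, as predicted.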
Now to my question: can we apply this same idea to approximate logarithms?
For example, to calculate the base-2 logarithm of $3$: we calculate $3\cdot 3 = 9$, with binary expansion 1001, which we pretend is 1000, giving the approximation $\log_2(9) \approx 3$, or by the log law $2\log_2(3) \approx 3$, and therefore $\log_2 3 \approx \frac{3}{2} = 1.5$. The real value is $1.585...$, so this is somewhat close.
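The step in this example can be stated generally: whenever some power of $x$ lands close to a power of two,

$$x^n \approx 2^m \quad\Longrightarrow\quad n\log_2 x \approx m \quad\Longrightarrow\quad \log_2 x \approx \frac{m}{n},$$

and the more leading bits of $x^n$ agree with $2^m$, the better the approximation (in the example above, $n = 2$ and $m = 3$).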
Given that only multiplications and bit shifts are required, and that we can discard the least significant half of the product without trouble, could this be a feasible approach for a floating point log approximator?
I suspect this has already been tried, so any references you may have to such methods are heartily welcomed.
Thanks for all the input. The algorithm I had in mind is a bit more like the following.
(Note that if we have floating point arithmetic we can skip keeping track of the exponent. Also, the division by $n$ is quick to implement as a floating point multiplication by $\frac{1}{n}$, which we can tabulate if we want.)
We try the number 666 with an implementation in C, compiled with gcc. We provide the value of the C library function log2f for reference.
$$\left[\begin{array}{cccc}n&k&\text{output}&\text{log2f}(666.0)\\2& 1&9.500000&9.379378\\5& 2&9.400001&9.379378\\13& 3&9.384616&9.379378\\29& 4&9.379311&9.379378\\\end{array}\right]$$
As we can see, the number of iterations required before we first find $k$ ones in a row grows rather quickly. We would want to get an estimate faster, but as a first prototype it certainly works.
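A sketch of the procedure in C (not the exact code used for the table: the name `approx_log2` and the stopping rule $m \ge 1 - 2^{-(k+1)}$ are one possible reading, and for larger $k$ the $n$ it settles on may differ from the values above):

```c
#include <math.h>

/* Approximate log2(x): raise x to successive powers n = 1, 2, 3, ...
 * until the binary expansion of x^n starts with the leading 1 followed
 * by a run of k more ones, i.e. x^n sits just below a power of two 2^e.
 * Then n*log2(x) ~ e, so log2(x) ~ e/n; larger k means higher accuracy. */
static float approx_log2(float x, int k)
{
    double p = x;                        /* p = x^n, kept in a double   */
    for (int n = 1; ; n++) {
        int e;
        double m = frexp(p, &e);         /* p = m * 2^e, m in [0.5, 1)  */
        /* the k bits after the leading 1 of p are all ones exactly when
         * m >= 1 - 2^-(k+1); the n > 64 cap stops the search when no
         * power qualifies (e.g. when x is an exact power of two)       */
        if (m >= 1.0 - ldexp(1.0, -(k + 1)) || n > 64)
            return (float)e / (float)n;  /* e = ceil(n*log2(x)) here    */
        p *= x;                          /* next power of x             */
    }
}
```

With $x = 666$ this reproduces the first rows of the table: $k=1$ stops at $n=2$ giving $19/2 = 9.5$, $k=2$ stops at $n=5$ giving $47/5 = 9.4$, and $k=3$ stops at $n=13$ giving $122/13 \approx 9.384615$.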