I am interested in the idea of doing binary arithmetic without electronics. In particular, I’d like to try calculating the binary logarithm of x (where 10 > x >= 1) using the square-and-divide algorithm: square the mantissa; if the result is less than 10, write a 0; otherwise write a 1 and divide the mantissa by 10.
(The above numbers are in binary. The subscript I tried to use for indicating the base looked awful.)
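For concreteness, the algorithm can be sketched in a few lines of Python using exact rational arithmetic; the helper name `log2_bits` is my own, and the constant 2 below is the binary 10 of the description:

```python
from fractions import Fraction

def log2_bits(x, n):
    """First n fractional bits of log2(x), for 1 <= x < 2.

    Square the mantissa; if the square reaches 2 (binary 10),
    emit a 1 and halve it; otherwise emit a 0."""
    m = Fraction(x)
    bits = []
    for _ in range(n):
        m *= m
        if m >= 2:
            bits.append(1)
            m /= 2
        else:
            bits.append(0)
    return bits

# log2(1.5) = 0.1001010111... in binary
print(log2_bits(Fraction(3, 2), 10))  # [1, 0, 0, 1, 0, 1, 0, 1, 1, 1]
```

Because `Fraction` keeps every intermediate value exact, this version exhibits precisely the bit-growth problem described below.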
The problem is the bit length of the intermediate values. If I want the binary log to sixteen or twenty bits, the length of my working value will run into millions of bits.
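That growth is easy to watch directly with exact rational arithmetic. In the sketch below (the starting value 1.6 is my arbitrary choice; `Fraction(1.6)` is the IEEE double nearest 1.6, whose numerator is 52 bits), the mantissa's bit length roughly doubles at every square-and-halve step, so twenty steps from a 52-bit start would indeed reach tens of millions of bits:

```python
from fractions import Fraction

# Run the square-and-halve steps exactly and watch the mantissa's
# numerator roughly double in length at every iteration.
m = Fraction(1.6)  # IEEE double nearest 1.6: a 52-bit numerator
for step in range(1, 11):
    m *= m
    if m >= 2:
        m /= 2
    print(step, m.numerator.bit_length())
# After 10 steps the exact mantissa is already ~52,000 bits long;
# 20 steps would put it around 52 * 2**20, i.e. tens of millions.
```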
What I would like to know specifically is how much and how often I can truncate the intermediate values without losing accuracy in the first sixteen significant bits, and in the first n significant bits in general. I figure that I need four or five extra bits for each position of the approximation, but I’d like to be sure.
EDIT: In other words, if I did all the work with the millions of bits, exactly how much effort would I be wasting? Is there a general rule that lets me truncate each intermediate mantissa to m bits without changing the first n bits of the log, and if so, what is the relationship between m and n?
EDIT: Question reworded to be clearer. I apologize.
Can I truncate the intermediate values without losing absolute accuracy?
Short answer: no.
Not-so-short answer: for addition (and subtraction), absolute errors add up; for multiplication (and division), relative errors add up. Since you are only interested in a 20-bit result, you can work out how much truncation is possible without doing harm.
$2^{20}\approx 10^6$, so six, or better seven, decimal digits of accuracy should be enough.
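As a numerical sanity check of the guard-bit idea (an experiment on one input, not a proof), here is the squaring algorithm run once with exact intermediates and once with the mantissa floor-truncated to n + 5 fractional bits after every step; the `log2_bits` helper and its `keep` parameter are my own sketch:

```python
from fractions import Fraction

def log2_bits(x, n, keep=None):
    """First n fractional bits of log2(x), for 1 <= x < 2.  When `keep`
    is given, the mantissa is floor-truncated to `keep` fractional bits
    after every step, modelling limited-precision hand calculation."""
    m = Fraction(x)
    bits = []
    for _ in range(n):
        m *= m
        if m >= 2:
            bits.append(1)
            m /= 2
        else:
            bits.append(0)
        if keep is not None:
            m = Fraction(int(m * 2**keep), 2**keep)  # drop excess bits
    return bits

n = 16
exact = log2_bits(Fraction(3, 2), n)               # exact intermediates
trunc = log2_bits(Fraction(3, 2), n, keep=n + 5)   # 5 guard bits
print(exact == trunc)  # True for this input
```

Each truncation perturbs the mantissa by at most $2^{-(n+5)}$, hence the log by at most $2^{-(n+5)}/\ln 2$, and the error injected at step $i$ carries weight $2^{-i}$ in the final result, so the accumulated error stays well below one unit in the n-th bit unless the true log happens to lie extremely close to a multiple of $2^{-n}$ (the usual table-maker's-dilemma caveat).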