Iterative calculation of $\log x$


Suppose one is given an initial approximation of $\log x$, $y_0$, so that: $$y_0 = \log x + \epsilon \approx \log x$$

Here, all that is known about $x$ is that $x>1$. Is there a general method of improving that estimation using only addition & multiplication, i.e. without exponentiation or logarithms? $$y_1 = f(y_0, x)=\ ?$$

There are 4 answers below.

---

Instead of solving $y - \ln(x) = 0$ for $y$, you can solve $g(y) = e^y - x = 0$.

Given an initial approximation $y_0 \approx \ln(x)$, you can solve for $y$ using Newton's method:

$$y_{n + 1} = y_n - \frac{g(y_n)}{g'(y_n)}= y_n - \frac{e^{y_n} - x}{e^{y_n}} = y_n - 1 +\frac{x}{e^{y_n}}$$

When exponentiation isn't allowed, you can approximate $e^{y}$ with $\left(1 + \frac{y}{2^m} \right)^{2^m}$ using $m$ repeated squarings: $$t_0 = 1 + \frac{y}{2^m}, \qquad t_{j+1} = t_j^2, \qquad t_m = \left(1 + \frac{y}{2^m} \right)^{2^m} \approx e^y.$$
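A minimal sketch of this scheme in Python (function names are mine; note the Newton step still needs one division by the approximate $e^{y_n}$):

```python
def exp_approx(y, m=30):
    """Approximate e^y as (1 + y/2^m)^(2^m) via m repeated squarings."""
    t = 1.0 + y / (1 << m)   # t_0 = 1 + y/2^m
    for _ in range(m):       # squaring doubles the exponent each pass
        t *= t
    return t

def log_newton(x, y0, iters=6, m=30):
    """Refine y0 ~ ln(x) with the Newton step y <- y - 1 + x/e^y."""
    y = y0
    for _ in range(iters):
        y = y - 1.0 + x / exp_approx(y, m)
    return y
```

With $m=30$ the squaring-based exponential is accurate to roughly $10^{-7}$ in double precision (rounding error in $1+y/2^m$ is amplified by the $m$ squarings), which bounds the accuracy of the refined logarithm.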

---

In any case, you also need to know $x_0=e^{y_0}$.

Then the correction is given by $y-y_0=\ln\left(\dfrac x{x_0}\right)=\ln(1+\delta)$, where $\delta$ is small. You can evaluate it with its Taylor series, which converges linearly:

$$\ln(1+\delta)\approx\delta-\frac{\delta^2}2+\frac{\delta^3}3-\cdots$$

You can also limit yourself to the first term $$\ln(1+\delta)\approx\delta,$$

but then you will have to correct with $x_1=x_0e^{\delta}\approx x_0\left(1+\delta+\frac{\delta^2}2+\cdots\right)$, so that $x_1$ tracks $e^{y_1}$; this variant converges a little faster.

I don't think there is a shortcut.
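A sketch of this correction loop (names are mine; $e^{\delta}$ is truncated at the $\delta^4$ term, so only addition, multiplication, and division are used):

```python
def refine_log(x, y0, x0, iters=5):
    """Given y0 ~ ln(x) and x0 = e^{y0}, repeat the correction
    y <- y + delta with delta = x/x0 - 1, updating x0 ~ e^y via a
    truncated series for e^delta."""
    y = y0
    for _ in range(iters):
        delta = x / x0 - 1.0   # x/x0 = 1 + delta, delta small
        y += delta             # ln(1 + delta) ~ delta
        d2 = delta * delta
        # x0 <- x0 * e^delta, series truncated after the delta^4 term
        x0 *= 1.0 + delta + d2 / 2.0 + d2 * delta / 6.0 + d2 * d2 / 24.0
    return y
```

The truncation of the $e^{\delta}$ series caps the final accuracy at about $\delta_0^5/120$ for the initial error $\delta_0$; keep more terms (or start closer) for higher precision.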


A similar approach is given by the CORDIC method.

Precompute a number of decreasing constants $l_k=\log(a_k)$, for instance $a_k=1+2^{-k}$.

Starting from $x_0$ and $y_0$, you apply the following step for each $k$ in turn:

$$\text{while } a_kx_n<x\to x_{n+1}=a_kx_n,\quad y_{n+1}=y_n+l_k.$$

(If $a_{k+1}>\sqrt{a_k}$, an $\text{if}$ suffices.)
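A sketch of this shift-and-add scheme (my naming; here started from scratch with $x_0=1$, $y_0=0$ rather than from an initial estimate):

```python
import math

def cordic_log(x, K=40):
    """Accumulate y ~ ln(x) for x > 1 using only comparisons,
    multiplications, and additions at run time; the table of
    l_k = ln(a_k) with a_k = 1 + 2^{-k} is precomputed once."""
    table = [(1.0 + 2.0 ** -k, math.log(1.0 + 2.0 ** -k)) for k in range(K)]
    xn, y = 1.0, 0.0
    for ak, lk in table:
        while ak * xn < x:   # multiply up while staying below x
            xn *= ak
            y += lk
    return y
```

After all $K$ levels, $x_n$ matches $x$ to within a factor $1+2^{-(K-1)}$, so the accumulated $y$ is within about $2^{-(K-1)}$ of $\ln x$.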

---

I don't know of any way to improve a close guess of the log, the way Newton-Raphson improves a square-root guess, unless you have the inverse of the log function. Note that the exponential is easy to evaluate for small arguments because its power series converges rapidly.

If you have an exact log of a number $z$ that is close to $x$, calculate $w=x/z$; then $\ln x = \ln z + \ln w$, and $w$ will be close to $1$. It is quick and easy to calculate the natural log of numbers close to $1$.

Crude: use $\ln(w) \approx w-1$.

Less crude: let $a_1 = (1+w)/2$ and approximate $\ln(w) \approx (w-1)/a_1$. This is the trapezoidal rule applied to $\int_1^w \frac{dt}{t}$. You could do better with Simpson's rule and other numerical integration methods, but that is not where I want to lead you.

Better: let $g_1 = \sqrt w$ and approximate $\ln(w) \approx (w-1)/g_1$.

Continuing, we can get better divisors: $a_2 = (a_1+g_1)/2$, $g_2 = \sqrt{a_2 g_1}$, and so on. This does not have quadratic convergence like the Gauss AGM algorithm, but it can be accelerated by the same method used in the HP calculators' Romberg-accelerated trapezoidal integrator. See Carlson, "An Algorithm for Computing Logarithms and Arctangents", Mathematics of Computation, April 1972. Only a few iterations are needed for very great accuracy. This was taught to me by Nate Grossman of UCLA at a Forth Interest Group meeting way back in the last century. (Another of his favorite methods was Newton-Raphson on the exponential.)
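This divisor iteration can be sketched as follows (my naming; convergence is linear, roughly a factor of $4$ per step, so a couple dozen iterations reach double precision):

```python
def agm_style_log(w, iters=30):
    """ln(w) ~ (w-1)/d, where the divisor d is refined by the
    iteration a_{n+1} = (a_n + g_n)/2, g_{n+1} = sqrt(a_{n+1} * g_n),
    started from a_1 = (1+w)/2 and g_1 = sqrt(w)."""
    a = (1.0 + w) / 2.0
    g = w ** 0.5
    for _ in range(iters):
        a = (a + g) / 2.0      # arithmetic mean
        g = (a * g) ** 0.5     # note: uses the *new* a, as above
    return (w - 1.0) / a
```

Both $a_n$ and $g_n$ converge to the common limit $(w-1)/\ln w$, which is what makes the final division deliver $\ln w$.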

A lot of our numerical technology involves strategies for avoiding "slow" operations on the computer: multiplication, inverse, square root, etc. Some day these will be incorporated into hardware and take only a single machine cycle. New worlds of high-speed computing will open up. Until then, fast calculation of logs is a core infrastructure of number crunching and a rich field for further exploration.

This is not exactly what you wanted, but I hope it is helpful.

---

A long time ago, in the last century, I had an early calculator with no log function. To take a log, I would hit the square-root button ten times, subtract $1$, and multiply by $1024$. The answer was good to $4$ decimal places, which was better than my two calibrated sticks could deliver. Square root is a powerful function that can be used to approximate many others.
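The button sequence computes $1024\left(x^{1/1024}-1\right)\approx\ln x$, since ten square roots give $x^{1/1024}$; a sketch (my naming):

```python
def sqrt_button_log(x, presses=10):
    """Press the square-root key `presses` times, subtract 1,
    multiply by 2**presses; the error is about (ln x)^2 / 2**(presses+1)."""
    r = x
    for _ in range(presses):
        r = r ** 0.5
    return (r - 1.0) * (1 << presses)
```

More presses reduce the truncation error but eventually lose to floating-point cancellation in $r-1$, which is why ten presses was a sweet spot on a decimal calculator.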

Do not ridicule calibrated sticks; they did most of the engineering that got us to the moon! I recently built a slide rule from Starbucks sticks that does multiplication modulo $23$. I used logs to base $7$.