Efficient and Accurate Formulas for Approximating sin x, cos x, tan x and ln x


Summary:
I am currently studying various mathematical functions and their real-world applications. I'm particularly interested in trigonometric functions $( \sin, \cos, \tan )$ and the natural logarithm $( \ln )$. I understand that there are series expansions like the Taylor Series for approximations, but I am curious if there are more computationally efficient methods.

What I Know:
I am familiar with basic trigonometry and calculus, including Taylor and Maclaurin series for approximating functions. I've also looked into numerical methods such as Newton's method for root finding, which is sometimes applied in related settings.

What I Don't Understand:
What are some of the most accurate and computationally efficient methods to approximate $ \sin(x) , \cos(x) , \tan(x) $ and $ \ln(x) $? Are there algorithms or formulas that provide a good trade-off between accuracy and computational cost?

What I've Tried:
I've implemented Taylor Series approximations in Python, but I find them to be computationally expensive for high degrees of accuracy. I am looking for alternatives that might be more suitable for real-time or embedded systems where computational resources are limited.


There are 4 best solutions below

Answer 1

For sine and cosine: You only need to calculate sin(x) and cos(x) for -pi/4 <= x <= pi/4. For x outside this range, you can reduce the argument into this range.
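A minimal Python sketch of this reduction, folding x onto the nearest multiple of pi/2 (the names `reduce_arg` and `sin_reduced` are illustrative, and math.sin/math.cos stand in for whatever core approximation you would use on the reduced range):

```python
import math

HALF_PI = math.pi / 2.0

def reduce_arg(x):
    """Fold x to r in [-pi/4, pi/4] plus a quadrant index q in {0, 1, 2, 3}."""
    q = int(round(x / HALF_PI))     # nearest multiple of pi/2
    r = x - q * HALF_PI
    return r, q % 4

def sin_reduced(x):
    # math.sin / math.cos stand in for the core approximation on [-pi/4, pi/4]
    r, q = reduce_arg(x)
    if q == 0:
        return math.sin(r)
    if q == 1:
        return math.cos(r)
    if q == 2:
        return -math.sin(r)
    return -math.cos(r)
```

Note that for very large arguments this naive subtraction loses accuracy; production libraries use an extended-precision representation of pi for the reduction (Payne–Hanek style).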

In this range, you could use a Taylor polynomial. But a Taylor polynomial is highly accurate near x = 0 and much less accurate at the ends of the range, so you would need a rather high degree. You also want the error of sin x to stay small relative to x when x is small, so write sin x = x - x^3 f(x^2) and cos x = 1 - x^2 g(x^2), and find approximations for f and g on 0 <= t <= pi^2/16 (the range of t = x^2 when |x| <= pi/4).
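To see why the factored form keeps the relative error small near 0, here is a sketch using the truncated Taylor coefficients of f as a stand-in (a minimax fit would replace them; `sin_poly` is an illustrative name):

```python
import math

def f(t):
    # truncated Taylor series of (x - sin x)/x^3 in t = x^2
    return 1.0/6.0 - t/120.0 + t*t/5040.0 - t*t*t/362880.0

def sin_poly(x):
    """sin(x) for |x| <= pi/4 via the factored form x - x^3 f(x^2)."""
    t = x * x
    return x - x * t * f(t)
```

For tiny x the correction term x^3 f(x^2) is far below the rounding threshold of x, so the leading term x is returned essentially exactly.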

You can find a good approximation by looking for an interpolation polynomial. For example, for a good 2nd-degree polynomial, you pick three points between 0 and pi^2/16 and find the polynomial that interpolates f (or g) at those points (different points for f and g). Chebyshev showed that a polynomial of degree n has minimal overall error exactly when its error attains the same maximum magnitude at n+2 consecutive points with alternating signs.

Now remember we wrote sin x = x - x^3 f(x^2); the Taylor series is x - x^3/3! + x^5/5! - x^7/7! + ..., so f(x^2) = 1/3! - x^2/5! + x^4/7! - x^6/9! + ..., and f(t) = 1/6 - t/5! + t^2/7! - t^3/9! + ... Instead of approximating f(t) directly, you can approximate the remainder h(t) = f(t) - (1/6 - t/5! + t^2/7!) = -t^3/9! + t^4/11! - t^5/13! + ...

You find the optimal degree-n polynomial by adjusting an interpolating polynomial through n+1 points until its error attains the same maximum magnitude, with alternating signs, at n+2 points. For h the values are quite small, so if you want the best third-degree polynomial you can just create a spreadsheet where you enter four points between 0 and pi^2/16, calculate the coefficients of the interpolating polynomial, draw a graph of the error, and move the interpolation points until you see the same maximum error, with alternating sign, at five points.
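The spreadsheet experiment can also be done in a few lines of Python. A convenient starting point, before moving any points by hand, is to interpolate f at Chebyshev nodes, which already lands close to the equioscillating optimum (the function and variable names here are illustrative; f is evaluated exactly as (x - sin x)/x^3, with a series fallback near 0 to avoid cancellation):

```python
import math

def f(t):
    # exact f(t) = (x - sin x)/x^3 with t = x^2; series fallback near 0
    if t < 1e-8:
        return 1.0/6.0 - t/120.0
    x = math.sqrt(t)
    return (x - math.sin(x)) / x**3

def cheb_nodes(a, b, n):
    # n Chebyshev points of the first kind, mapped to [a, b]
    return [(a + b)/2 + (b - a)/2 * math.cos((2*k + 1) * math.pi / (2*n))
            for k in range(n)]

def lagrange(xs, ys, t):
    # evaluate the interpolating polynomial through (xs, ys) at t
    total = 0.0
    for i, xi in enumerate(xs):
        w = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                w *= (t - xj) / (xi - xj)
        total += w
    return total

a, b = 0.0, (math.pi / 4) ** 2      # range of t = x^2 for |x| <= pi/4
xs = cheb_nodes(a, b, 4)            # 4 nodes -> degree-3 interpolant
ys = [f(t) for t in xs]
err = max(abs(lagrange(xs, ys, a + (b - a) * i / 200)
              - f(a + (b - a) * i / 200))
          for i in range(201))
```

Scanning the error over a fine grid, as in the last line, plays the role of the error graph in the spreadsheet.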

Answer 2

Taylor/Maclaurin expansions are a poor fit here because they give a small error only near a single point. You want an approximation whose error is small (and roughly equal) over an entire interval.

As the other answer suggested, Chebyshev polynomials are a good tool. Obtaining truly “best” approximations is difficult, but Chebyshev interpolants are easy to construct and are almost as good. You can make speed/accuracy trade-offs simply by adjusting the degree of the polynomial.
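For a quick feel of that degree/accuracy trade-off, numpy's `numpy.polynomial.chebyshev` module can build Chebyshev interpolants directly, a rough Python stand-in for what chebfun does in Matlab:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

xs = np.linspace(-1.0, 1.0, 1001)
errs = []
for deg in (3, 5, 7, 9):
    # interpolate sin at deg+1 Chebyshev points of the first kind on [-1, 1]
    coef = C.chebinterpolate(np.sin, deg)
    errs.append(float(np.max(np.abs(C.chebval(xs, coef) - np.sin(xs)))))
# errs shrinks rapidly as the degree grows
```

For smooth functions like sin, the maximum error decays roughly geometrically with the degree, which is exactly the trade-off knob mentioned above.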

You can find all the details in Lloyd Trefethen’s excellent book “Approximation Theory and Approximation Practice”. The algorithms described in the book are implemented in a Matlab add-on package called Chebfun. The Chebfun software and some chapters of the book are freely available at the Chebfun web site.

You might also find this page useful.

Answer 3

In general, if f(x) is continuous on an interval and n is an integer, then according to Chebyshev there is a polynomial of degree n which minimises the largest absolute error on the interval, and this polynomial attains its maximum absolute error at n+2 points in the interval with alternating signs.

That polynomial is optimal because any polynomial that reduces the error at n+1 of those points must increase it at the remaining one. Conversely, if the alternation condition is not met, the error can be reduced at up to n+1 points while increasing it only slightly elsewhere.

Because there are n+2 points where the error has alternating signs, the error must be 0 at n+1 points in between. So you can start with any polynomial interpolating at n+1 points, then move those points to shrink the largest errors step by step.

Now suppose you have found a good polynomial and want to evaluate it in floating-point arithmetic: you need to control the rounding errors as well. So you decide how the polynomial will be evaluated, then set up a linear programming problem that minimises the total error, rounding error included. The catch is that the coefficients cannot be arbitrary real numbers; they must be floating-point numbers.

For the sine function you might fix the first coefficient to 1 (otherwise sin x would have a relative error for very small x) and optimise the rest. You then compute the coefficient of x^3, likely close to -1/6, round it to a floating-point number, fix it, and re-optimise the remaining coefficients. The result is a formula that comes very close to producing correctly rounded results.

Answer 4

As you may have noticed from the comments and the other answers, your question does not have a purely mathematical answer. That is because you are asking about methods that are "computationally efficient". What this means is highly contextual, with mathematical aspects as well as software and hardware aspects. It depends, at least, on the following:

(a) How much accuracy do you need? A few bits? Double precision? Millions of decimals?

(b) How much memory are you willing to sacrifice for an additional boost of computational speed?

(c) How are you implementing the functions? Directly on the hardware? In assembly code? A software implementation using a high level language? Do you have access to fast implementations of square roots, division, fused multiply-add, ...?

Very crudely, algorithms for elementary functions can be grouped into the following categories:

(1) For a few bits of precision, shift-and-add algorithms can be used. These are iterative algorithms with slow convergence that use only (cheap) additions and shifts, and they usually require very little memory. CORDIC belongs to this category. Such algorithms are usually reserved for hardware implementations.
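A toy CORDIC rotation in Python (floating point is used here only for readability; a real hardware implementation would use fixed-point integers and a small table of stored angles):

```python
import math

ITERS = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
# CORDIC gain: each micro-rotation stretches the vector by a fixed factor,
# which is pre-compensated by starting from K instead of 1
K = 1.0
for i in range(ITERS):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sincos(theta):
    """Rotation-mode CORDIC; valid for |theta| <= ~1.74 rad."""
    x, y, z = K, 0.0, theta
    for i in range(ITERS):
        d = 1.0 if z >= 0.0 else -1.0
        # rotate by +/- atan(2^-i) using only shifts and adds (here: * 2^-i)
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x          # (sin theta, cos theta)
```

Each iteration adds roughly one bit of precision, which is why the method is attractive when multipliers are expensive.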

(2) For some elementary functions, notably square roots and logarithms, bit manipulation may very cheaply provide a crude estimate. These methods exploit properties of the floating point format to approximate functions.
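A well-known instance for base-2 logarithms: reinterpreting an IEEE-754 double's bits as an integer yields a piecewise-linear approximation of log2 almost for free. A Python sketch, where struct does the bit reinterpretation a C cast would do:

```python
import math
import struct

def crude_log2(x):
    """Crude log2(x) for x > 0, read straight from the IEEE-754 bits."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    # exponent field plus mantissa fraction, with the 1023 bias removed
    return bits / 2.0 ** 52 - 1023.0
```

The result is exact at powers of two and off by at most about 0.09 in between; it makes a good seed for a Newton iteration or a short polynomial correction.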

(3) Polynomial or rational approximations can be used when a little more precision is needed. Usually, the approximation is chosen to minimize the pointwise maximum error in some finite domain (more about that later) using e.g. Remez's algorithm.

(4) Table-based methods simply tabulate the values of functions in some interval. How fine-grained and accurate the table is depends on the desired precision of the function. This might require quite a bit of memory.
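A minimal table-plus-linear-interpolation sine for [0, pi/2] (the table size of 1024 entries is an arbitrary choice for illustration; the maximum error scales roughly with the square of the step):

```python
import math

N = 1024
STEP = (math.pi / 2) / N
TABLE = [math.sin(i * STEP) for i in range(N + 1)]   # N+1 entries, endpoints included

def sin_table(x):
    """Linear interpolation in a precomputed table; x in [0, pi/2]."""
    u = x / STEP
    i = min(int(u), N - 1)          # clamp so TABLE[i + 1] stays in range
    frac = u - i
    return TABLE[i] * (1.0 - frac) + TABLE[i + 1] * frac
```

Doubling the table size cuts the error by about a factor of four, which is the memory-for-accuracy trade-off mentioned above.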

(5) If very precise evaluations are needed (thousands of decimals, say), algorithms based on iterated means can be very efficient. E.g. the logarithm can be computed by alternating evaluations of arithmetic and geometric means.
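One classical formula of this kind is ln s ≈ pi / (2 M(1, 4/s)) for large s, where M is the arithmetic–geometric mean; scaling x up by powers of two makes the argument large enough. A double-precision sketch (at this precision a polynomial would of course be cheaper; the AGM only pays off at very high precision, where each iteration doubles the number of correct digits):

```python
import math

def agm(a, b):
    # arithmetic-geometric mean; converges quadratically
    for _ in range(40):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def agm_log(x):
    """ln(x) via ln(s) ~ pi / (2 * M(1, 4/s)) for large s."""
    m = 0
    s = x
    while s < 2.0 ** 30:    # scale up so the formula's O(1/s^2) error is negligible
        s *= 2.0
        m += 1
    return math.pi / (2.0 * agm(1.0, 4.0 / s)) - m * math.log(2.0)
```

The final subtraction of m ln 2 undoes the scaling, so constants pi and ln 2 must be available at the working precision.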

Most of the techniques above begin by swapping $x$ for another value $t$ that is guaranteed to be in some small, selected interval, $t \in [a,b]$. Special properties of the functions are then used to transform back, after the computation is done. This process is known as range reduction or argument reduction and is an art of its own. For trigonometric functions, a quarter period suffices as a range. For logarithms, the argument is typically reduced until it is located in some small region around 1.

Finally, there are algorithms that combine ideas from the categories above. E.g. the implementation of log(x) in the Julia language uses a combination of lookup tables and polynomial interpolation (after an initial range reduction).
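Without the lookup table, the skeleton of such an implementation looks like this: range reduction via frexp, then a short polynomial (here a truncated atanh series for illustration, not Julia's actual minimax coefficients):

```python
import math

SQRT_HALF = math.sqrt(0.5)
LN2 = 0.6931471805599453

def log_reduced(x):
    """ln(x) for x > 0: reduce to m in [sqrt(1/2), sqrt(2)), then a short series."""
    m, e = math.frexp(x)            # x = m * 2**e with m in [0.5, 1)
    if m < SQRT_HALF:               # keep m close to 1
        m *= 2.0
        e -= 1
    s = (m - 1.0) / (m + 1.0)       # |s| <= 0.172, and ln m = 2*atanh(s)
    s2 = s * s
    # Horner evaluation of 2*(s + s^3/3 + s^5/5 + s^7/7 + s^9/9)
    p = s * (2.0 + s2 * (2.0/3.0 + s2 * (2.0/5.0 + s2 * (2.0/7.0 + s2 * (2.0/9.0)))))
    return p + e * LN2
```

Keeping the reduced mantissa in [sqrt(1/2), sqrt(2)) bounds |s| by about 0.172, which is what lets such a short polynomial reach near double precision.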

In summary, your question is difficult to answer and not purely mathematical in nature. Hopefully you have some new pointers to what you might want to look into for more information.