Summary:
I am currently studying various mathematical functions and their real-world applications. I'm particularly interested in the trigonometric functions ($\sin$, $\cos$, $\tan$) and the natural logarithm ($\ln$). I understand that there are series expansions like the Taylor series for approximations, but I am curious whether there are more computationally efficient methods.
What I Know:
I am familiar with basic trigonometry and calculus, including Taylor and Maclaurin series for approximating functions. I've also looked into some numerical methods, like Newton's method for root finding, which are sometimes applied in related settings.
What I Don't Understand:
What are some of the most accurate and computationally efficient methods to approximate $\sin(x)$, $\cos(x)$, $\tan(x)$, and $\ln(x)$? Are there algorithms or formulas that provide a good trade-off between accuracy and computational cost?
What I've Tried:
I've implemented Taylor Series approximations in Python, but I find them to be computationally expensive for high degrees of accuracy. I am looking for alternatives that might be more suitable for real-time or embedded systems where computational resources are limited.
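For reference, a straightforward Taylor (Maclaurin) implementation of the kind described might look like this minimal sketch (the function name `taylor_sin` and the term count are illustrative choices, not from the original):

```python
import math

def taylor_sin(x, terms=8):
    """Approximate sin(x) with the Maclaurin series x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total
```

The cost grows with the number of terms, and for large |x| many terms are needed, which is exactly the inefficiency the question is about.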
For sine and cosine: you only need to calculate sin(x) and cos(x) for -pi/4 <= x <= pi/4. For x outside this range, you can reduce the argument into this range using periodicity and the symmetry identities relating sin and cos.
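A minimal sketch of that range reduction, assuming a quadrant-based reduction by multiples of pi/2 (here `math.sin`/`math.cos` stand in for the polynomial cores you would actually use; a production version would also need a more careful reduction for very large x):

```python
import math

def reduce_and_sin(x):
    """Compute sin(x) by reducing x so the core only runs on [-pi/4, pi/4]."""
    k = round(x / (math.pi / 2))   # nearest multiple of pi/2
    r = x - k * (math.pi / 2)      # now -pi/4 <= r <= pi/4
    k %= 4
    if k == 0:
        return math.sin(r)         # stand-in for the polynomial core
    if k == 1:
        return math.cos(r)         # sin(r + pi/2) = cos(r)
    if k == 2:
        return -math.sin(r)        # sin(r + pi)   = -sin(r)
    return -math.cos(r)            # sin(r + 3pi/2) = -cos(r)
```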
In this range you could use a Taylor polynomial. But a Taylor polynomial is highly accurate near x = 0 and much less accurate at the ends of the range, so you need a rather high degree. You also want the relative error of sin x to stay small for small x, so write sin x = x - x^3 f(x^2) and cos x = 1 - x^2 g(x^2), and find approximations for f and g on 0 <= t <= pi^2/16 (the range of t = x^2 when |x| <= pi/4).
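As a concrete check of the rearranged form, here is a minimal Python sketch (the name `sin_core` is mine) using a few Taylor terms of f, i.e. sin x = x - x^3 * f(x^2):

```python
import math

def sin_core(x):
    """sin on [-pi/4, pi/4] via sin x = x - x^3 * f(x^2),
    with f truncated from its Taylor series 1/3! - t/5! + t^2/7! - t^3/9!."""
    t = x * x
    f = (1 / math.factorial(3) - t / math.factorial(5)
         + t * t / math.factorial(7) - t ** 3 / math.factorial(9))
    return x - x ** 3 * f
```

Because the x and x^3 factors are exact, the relative error for small x is controlled by the error of f alone, which is the point of the rearrangement.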
You can find a good approximation by looking for an interpolation polynomial. For example, for a good 2nd-degree polynomial, you pick three points between 0 and pi^2/16 and find the polynomial that interpolates f (or g) at those points (different points for f and g). Chebyshev showed that a polynomial of degree n has minimal overall (maximum) error exactly when its error attains the same maximum magnitude at n+2 consecutive points with alternating signs.
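A minimal sketch of that interpolation step, assuming f is defined by sin x = x - x^3 f(x^2); the helper names and the particular node placement are my own choices, meant as a starting point before you tune the nodes:

```python
import math

def f_exact(t):
    """f from sin x = x - x^3 f(x^2), i.e. f(t) = (x - sin x)/x^3 at x = sqrt(t)."""
    if t == 0.0:
        return 1.0 / 6.0            # limit as x -> 0
    x = math.sqrt(t)
    return (x - math.sin(x)) / x ** 3

def lagrange_interp(points, t):
    """Evaluate the polynomial interpolating the (ti, fi) pairs at t."""
    total = 0.0
    for i, (ti, fi) in enumerate(points):
        term = fi
        for j, (tj, _) in enumerate(points):
            if j != i:
                term *= (t - tj) / (ti - tj)
        total += term
    return total

# Three trial points on [0, (pi/4)^2]; their placement is the free parameter.
T = (math.pi / 4) ** 2
nodes = [0.1 * T, 0.5 * T, 0.9 * T]
points = [(t, f_exact(t)) for t in nodes]
err = max(abs(lagrange_interp(points, t) - f_exact(t))
          for t in [k * T / 100 for k in range(101)])
```

Scanning `err` over a dense grid shows how the error curve behaves; moving the nodes around changes where the error peaks, which is the knob the equioscillation criterion tells you how to turn.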
Now remember we wrote sin x = x - x^3 f(x^2). The Taylor series is sin x = x - x^3/3! + x^5/5! - x^7/7! + ..., so f(x^2) = 1/3! - x^2/5! + x^4/7! - ..., and therefore f(t) = 1/3! - t/5! + t^2/7! - t^3/9! + ... Instead of approximating f(t) directly, approximate the remainder f'(t) = f(t) - (1/3! - t/5! + t^2/7!) = -t^3/9! + t^4/11! - t^5/13! + ... (f' here is just a name for that remainder, not the derivative).
You find the optimal n-th degree polynomial as an interpolating polynomial through n+1 points whose error attains its maximum magnitude at n+2 points with alternating signs. The values of f' are quite small, so if you want the best third-degree polynomial you can just create a spreadsheet: enter four points between 0 and pi^2/16, calculate the coefficients of the interpolating polynomial, draw a graph of the error, and move the interpolation points until you get the same maximum error five times with alternating signs.
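The spreadsheet experiment above can be sketched in Python as well; this is a minimal version (the names `f_tail` and `interp`, the truncation of the series, and the four trial nodes are my own assumptions) that computes the error curve for one choice of nodes, which you would then nudge by hand toward equioscillation:

```python
import math

def f_tail(t):
    """The remainder f'(t) = f(t) - (1/3! - t/5! + t^2/7!)
    = -t^3/9! + t^4/11! - ..., truncated after a few terms."""
    return sum((-1) ** k * t ** k / math.factorial(2 * k + 3)
               for k in range(3, 8))

def interp(points, t):
    """Lagrange evaluation of the cubic through the four trial points."""
    total = 0.0
    for i, (ti, fi) in enumerate(points):
        term = fi
        for j, (tj, _) in enumerate(points):
            if j != i:
                term *= (t - tj) / (ti - tj)
        total += term
    return total

T = (math.pi / 4) ** 2
nodes = [0.15 * T, 0.45 * T, 0.75 * T, 0.95 * T]   # trial points, to be nudged
points = [(t, f_tail(t)) for t in nodes]
grid = [k * T / 200 for k in range(201)]
errors = [interp(points, t) - f_tail(t) for t in grid]
max_err = max(abs(e) for e in errors)
```

Because f' is tiny on this interval, even a non-optimal cubic leaves an absolute error far below the leading terms you subtracted, which is why approximating the remainder instead of f itself is so forgiving.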