Evaluate $\pi$ more efficiently using polynomials of lower degrees


I know that you can use $\pi = 4\arctan(1)$ to evaluate $\pi$. The Taylor series of $\arctan(x)$ is $$ x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{k=1}^\infty \, (-1)^{k+1} \cdot \frac{x^{2k-1}}{2k-1}. $$

But I was thinking that you could get the series to converge quicker than the standard arctangent Maclaurin series by using this: $$ \frac{\pi}{4} = 4\arctan\frac{1}{5} - \arctan\frac{1}{239}. $$ Would that work?

Best Answer

There is much information at https://en.wikipedia.org/wiki/Machin-like_formula; here I summarize some of it.

Machin's formula, $$ {\pi\over4}=4\arctan{1\over5}-\arctan{1\over239} $$ has been mentioned. John Machin used it in 1706 to compute $\pi$ to 100 digits.
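The speed-up over the plain $\arctan(1)$ series is easy to see numerically. Here is a small Python sketch (the helper name `arctan_series` is my own) comparing partial sums of the two approaches:

```python
import math

def arctan_series(x, terms):
    """Partial sum of the arctan Maclaurin series x - x^3/3 + x^5/5 - ..."""
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(terms))

# Leibniz: pi = 4*arctan(1), converges very slowly
leibniz = 4 * arctan_series(1, 50)

# Machin: pi = 16*arctan(1/5) - 4*arctan(1/239)
machin = 16 * arctan_series(1/5, 10) - 4 * arctan_series(1/239, 10)

print(abs(leibniz - math.pi))  # error near 0.02 even after 50 terms
print(abs(machin - math.pi))   # near double-precision accuracy with 10 terms
```

The difference comes from the geometric factor: each term of the Leibniz series shrinks only like $1/(2k+1)$, while the Machin terms pick up an extra factor of $(1/5)^2$ or $(1/239)^2$ per term.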

Machin-like is the name given to formulas of the type, $$ c_0{\pi\over4}=\sum_{n=1}^Nc_n\arctan{a_n\over b_n} $$ where $a_n,b_n,c_n$ are integers, $0<a_n<b_n$, $c_n\ne0$, $c_0>0$. It is common to take $a_n=1$ for all $n$.

D. H. Lehmer introduced the formula $$ \lambda=\sum_{n=1}^N{1\over\log_{10}(b_n/a_n)} $$ as a measure of the computational efficiency of Machin-like formulas – the smaller $\lambda$, the more efficient the formula. The smallest known value (in the case where $a_n=1$ for all $n$) is $\lambda=1.51244\dots$, achieved by Hwang Chien-Lih in 1997. This is attained by $$ {\pi\over4}=183\arctan{1\over239}+32\arctan{1\over1023}-68\arctan{1\over5832}+12\arctan{1\over110443}-12\arctan{1\over4841182}-100\arctan{1\over6826318} $$
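Lehmer's measure is straightforward to compute. Below is a sketch (the function name and the $(c_n, a_n, b_n)$-triple representation are my own choices) that evaluates it for Machin's formula and for Hwang Chien-Lih's formula above:

```python
import math

def lehmer_measure(terms):
    """Lehmer's lambda for a Machin-like formula given as (c_n, a_n, b_n) triples."""
    return sum(1 / math.log10(b / a) for _, a, b in terms)

# Machin's original formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
machin = [(4, 1, 5), (-1, 1, 239)]
print(lehmer_measure(machin))  # about 1.85112

# Hwang Chien-Lih's 1997 formula
hwang = [(183, 1, 239), (32, 1, 1023), (-68, 1, 5832),
         (12, 1, 110443), (-12, 1, 4841182), (-100, 1, 6826318)]
print(lehmer_measure(hwang))   # about 1.51244
```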

In 2002, Yasumasa Kanada computed $1,241,100,000,000$ digits of $\pi$ using the equations $$ {\pi\over4}=12\arctan{1\over49}+32\arctan{1\over57}-5\arctan{1\over239}+12\arctan{1\over110443} $$ due to Kikuo Takano in 1982, and $$ {\pi\over4}=44\arctan{1\over57}+7\arctan{1\over239}-12\arctan{1\over682}+24\arctan{1\over12943} $$ due to F. C. M. Störmer in 1896.
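Both formulas can be sanity-checked in ordinary double precision:

```python
import math

# Takano (1982)
takano = (12 * math.atan(1/49) + 32 * math.atan(1/57)
          - 5 * math.atan(1/239) + 12 * math.atan(1/110443))

# Stormer (1896)
stormer = (44 * math.atan(1/57) + 7 * math.atan(1/239)
           - 12 * math.atan(1/682) + 24 * math.atan(1/12943))

print(abs(takano - math.pi / 4))   # zero up to floating-point rounding
print(abs(stormer - math.pi / 4))  # zero up to floating-point rounding
```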

A "binary splitting algorithm" can be used to compute the arctangents much faster than by adding the terms in the Taylor series one at a time.
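As a rough illustration of the idea (not a production implementation; real binary splitting carries integer products up the recursion rather than using exact fractions at the leaves), here is a sketch using Python's `fractions` to evaluate the arctan partial sums exactly and extract digits of $\pi$ via Machin's formula:

```python
from fractions import Fraction

def arctan_bs(m, terms):
    """Binary-split evaluation of the partial sum of arctan(1/m):
    sum over k < terms of (-1)^k / ((2k+1) * m^(2k+1)), as an exact fraction."""
    def split(lo, hi):
        if hi - lo == 1:
            k = lo
            sign = -1 if k % 2 else 1
            return Fraction(sign, (2*k + 1) * m**(2*k + 1))
        mid = (lo + hi) // 2
        # Combining halves recursively keeps the operands balanced in size,
        # which is the source of binary splitting's speed at high precision.
        return split(lo, mid) + split(mid, hi)
    return split(0, terms)

# Machin's formula with exact rational arithmetic
pi_approx = 16 * arctan_bs(5, 30) - 4 * arctan_bs(239, 10)

# First 40 digits after the decimal point via integer arithmetic
scaled = pi_approx.numerator * 10**40 // pi_approx.denominator
print(scaled)  # 3141592653589793238462643383279502884...
```

With only 30 and 10 terms respectively, the truncation error is far below $10^{-40}$, so all printed digits are correct; for serious digit counts one would replace `Fraction` with raw integer numerator/denominator pairs and a final scaled division.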