A curious way of generating series expansion for $\cos x$


If we take the approximation $\sin x \approx x$, then, using the trigonometric identity $1- \cos 2x = 2\sin^2 x$ with $2\sin^2 x \approx 2x^2$, we get, after the substitution $x \to x/2$, that

$$\cos x \approx 1 - \frac{x^2}{2}$$

Now, using the trigonometric identity

$$\cos(2x)=2\cos^2(x)-1$$

and substituting the last approximation $\cos x \approx 1 - \frac{x^2}{2}$ into the right-hand side, we get a new approximation:

$$\cos(2x)\approx2\left(1-\frac{x^2}2\right)^2-1=1-2x^2+\frac{x^4}2$$

Then let $x\to\frac x2$ to get

$$\cos(x)\approx1-\frac{x^2}2+\frac{x^4}{32}$$

So, we repeat:

$$\cos(2x)=2\cos^2(x)-1\approx2\left(1-\frac{x^2}2+\frac{x^4}{32}\right)^2-1$$

and so on. This seems to generate a series expansion for $\cos x$, similar to the Taylor series but with larger denominators.

The question is: does the iterative procedure described above generate better and better approximations to $\cos x$, that is, a Taylor-like series, or does it fail to converge to $\cos x$ to arbitrary accuracy for real $x$?
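For a quick numerical sanity check, the procedure can also be run pointwise rather than on coefficients: seed with $1 - t^2/2$ at $t = x/2^n$ and apply the doubling identity $n$ times. A minimal Python sketch (the function name is mine):

```python
import math

def cos_iter(x, n):
    """Approximate cos(x) by applying cos(2t) = 2*cos(t)**2 - 1
    n times, seeded with the small-angle polynomial 1 - t**2/2."""
    if n == 0:
        return 1 - x * x / 2
    half = cos_iter(x / 2, n - 1)
    return 2 * half * half - 1

# The pointwise truncation error shrinks by roughly a factor of 4 per level.
err = abs(cos_iter(1.0, 12) - math.cos(1.0))
```

Note that in floating point the doubling step also amplifies rounding error by a factor of about 4 per level, so there is an optimal depth; a dozen or so levels already gives roughly eight correct decimal places for moderate $x$.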

2 Answers

BEST ANSWER

You get weird coefficients for terms beyond $x^2$ because you're using an inaccurate input for the double angle formula. If you have

$\cos x = 1-(x^2/2)+O(x^4)$,

you must accept

$\cos 2x = 1-(2x^2)+(x^4/2)+O(x^4)$;

and putting $x/2$ for $x$ then gives

$\cos x = 1-(x^2/2)+(x^4/32)+O(x^4)$

You did not make the error term any smaller in order of magnitude than the quartic term you added, so you cannot defend the quartic term.

How to get the right quartic term? Assume

$\cos x = 1-(x^2/2)+(ax^4)+O(x^6)$.

Apply the double angle formula; after simplifying you get

$\cos 2x = 1-(2x^2)+((4a+(1/2))x^4)+O(x^6)$,

where everything that is a multiple of $x^6$ or a higher power is lost in the "noise" of the error term. Putting $x/2$ for $x$ then gives

$\cos x = 1-(x^2/2)+((a/4+(1/32))x^4)+O(x^6)$.

This matches your original assumption if $a=(a/4)+(1/32)$; thus, properly, $a=1/24$, which is exactly the Taylor coefficient $1/4!$.
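The self-consistency condition can also be checked mechanically: the map $a \mapsto a/4 + 1/32$ is a contraction (slope $1/4$), so iterating it from any start converges to its unique fixed point. A small Python sketch with exact rational arithmetic (variable names are mine):

```python
from fractions import Fraction

# Solve the fixed-point equation a = a/4 + 1/32 exactly:
a_exact = Fraction(1, 32) / (1 - Fraction(1, 4))   # = 1/24

# Or iterate the map from an arbitrary start;
# the distance to the fixed point shrinks by 1/4 per step.
a = Fraction(0)
for _ in range(40):
    a = a / 4 + Fraction(1, 32)
```

Both routes give $1/24 = 1/4!$, the true coefficient of $x^4$ in the Taylor series of $\cos x$.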

ANSWER

I've written a little R program for your iteration. It represents each series as the vector of its coefficients. I then define the effect of halving the argument on the coefficients

halvearg <- function(x) ((1/2)^(0:(length(x) - 1))) * x

and the multiplication of series

seriesprod <- function(x, y) {
    n <- length(x)              # x and y are assumed to have the same length
    sprod <- rep(0, 2 * n - 1)  # product of two degree-(n-1) series: 2n-1 coefficients
    for (l in 1:(2 * n - 1)) {
        for (k in max(l - n + 1, 1):min(l, n)) {
            sprod[l] <- sprod[l] + x[k] * y[l - k + 1]  # discrete convolution
        }
    }
    return(sprod)
}

In principle, multiplication of series could be done by the convolve function in R, which uses the FFT. But it doesn't produce exactly the output I want, so I'll either have to write my own variant of convolve or try to adapt the existing one. For now, I just defined my own series multiplication.
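Indeed, the product step is nothing but a discrete convolution of the coefficient vectors; as a cross-check, here is the same operation as a short Python sketch (names are mine):

```python
def series_prod(x, y):
    """Coefficients of the product of two truncated power series,
    constant term first: out[l] = sum over k of x[k] * y[l - k]."""
    out = [0.0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out[i + j] += xi * yj
    return out

# (1 - x^2/2)^2 = 1 - x^2 + x^4/4
assert series_prod([1, 0, -0.5], [1, 0, -0.5]) == [1, 0, -1.0, 0, 0.25]
```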

Then, I apply your iteration scheme on the starting input

$$\cos x = 1-\frac{x^2}{2}$$

which is represented as

x <- c(1, 0, -0.5)

Here's the iteration

for (k in 1:10) {
    unit <- c(1, rep(0, 2 * length(x) - 2))  # the series "1", padded to the product length
    x <- halvearg(x)                         # substitute x -> x/2
    x <- 2 * seriesprod(x, x) - unit         # cos(2x) = 2*cos(x)^2 - 1
}

Notice I evaluate only a few steps, because the series grows very fast: its length obeys the recursion

$$\ell_n=2\ell_{n-1}-1,$$

so the length roughly doubles with each step, which is exponential growth.
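Concretely, this recursion has the closed form $\ell_n = 2^n(\ell_0-1)+1$; with the seed length $\ell_0 = 3$ used here, ten iterations already give $2^{10}\cdot 2 + 1 = 2049$ coefficients. A quick Python check (names are mine):

```python
def length_after(n, l0=3):
    """Closed form of the length recursion l_n = 2*l_{n-1} - 1."""
    return 2 ** n * (l0 - 1) + 1

# Verify the closed form against the recursion itself.
l = 3
for n in range(1, 11):
    l = 2 * l - 1
    assert l == length_after(n)
```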

Here are the first 10 coefficients of the series after those 10 iterations

[1]  1.000000e+00  0.000000e+00 -5.000000e-01  0.000000e+00  4.166663e-02
[6]  0.000000e+00 -1.388882e-03  0.000000e+00  2.480126e-05  0.000000e+00

Here are the first ten coefficients of the actual Taylor series of $\cos$ for comparison

[1] 1.000000e+00  0.000000e+00 -5.000000e-01 0.000000e+00 4.166667e-02
[6] 0.000000e+00 -1.388889e-03  0.000000e+00 2.480159e-05 0.000000e+00

Not too bad. Here's a plot of the logarithm of the absolute values of the coefficients, in blue, at each step of the iteration. The red curve is the negative logarithm of the gamma (factorial) function. You can see the convergence is pretty fast.

[Plot: iteration convergence]