Speeding up the convergence of a series


I want to speed up the convergence of a series involving rational expressions. The series is $$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1}$$ If I have not misunderstood anything, the error in truncating an alternating series is at most the absolute value of the first neglected term. The general term is $\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1}$, straight from the definition of the series. I evaluated the series with Maxima, the computer algebra system, and noticed that to get 13 decimal places one must wade through $312958$ terms. I had to kill the computer's GUI and some other system processes to run Maxima, and the computation took about 5 minutes. The final sum I obtained was $0.3106137076850$. Is there any way to speed up the convergence of the sum? In general, is there any way to speed up the convergence of $$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {p(x)} {q(x)}$$

where both ${p(x)}$ and ${q(x)}$ are polynomials?
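For reference, here is a minimal Python sketch of the brute-force summation described above (the original computation used Maxima; the term count $312958$ is taken from the question):

```python
import math

def term(x):
    """General term of the series, without the alternating sign."""
    return (-x**2 - 2*x + 1) / (x**4 + 2*x**2 + 1)

def direct_sum(n_terms):
    """Brute-force partial sum of sum_{x=1}^{n_terms} (-1)^x * term(x)."""
    # math.fsum keeps the accumulated floating-point rounding error negligible
    return math.fsum((-1)**x * term(x) for x in range(1, n_terms + 1))

print(direct_sum(312958))  # ≈ 0.3106137077, but needs ~3*10^5 terms
```

Even in plain Python this runs in seconds rather than minutes; the real bottleneck, which the answers address, is the number of terms rather than the cost of each term.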


4 Answers

Best Answer

There are several methods to speed up the summation of a series, for example Euler summation or the Shanks transformation. Here is a simple method that works quite well. Let $$f(n)=\frac{n^2+2n-1}{n^4+2n^2+1}$$ and $$g(n)=\tfrac{1}{2}f(2n-1)-f(2n)+\tfrac{1}{2}f(2n+1).$$ Then $$f(1)-f(2)+f(3)-\ldots = \tfrac{1}{2}f(1) +g(1)+g(2)+g(3)+\ldots$$ but the right-hand side converges much faster than the left-hand side: $g$ is essentially a second difference of $f$, so it decays like $f''$, two orders faster than $f$ itself. This is a generic method. For example, if you take $$f(n)=\frac{1}{n}$$ then $$g(n)=\frac{1}{(2n-1)\cdot 2n \cdot (2n+1)}$$ and $$\log(2)=1-\frac{1}{2}+\frac{1}{3}-\ldots =\frac{1}{2}+\frac{1}{1\cdot 2\cdot 3} + \frac{1}{3\cdot 4 \cdot 5}+\ldots$$
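A sketch of this pairing trick in Python (the function names and the cutoff of $10^4$ terms are my own choices): since $g(n)$ decays like $n^{-4}$ while the original terms decay like $n^{-2}$, a few thousand terms of $g$ already beat hundreds of thousands of terms of the original series.

```python
import math

def f(n):
    return (n**2 + 2*n - 1) / (n**4 + 2*n**2 + 1)

def g(n):
    # g is (half of) a second difference of f, so g(n) = O(n**-4)
    return 0.5 * f(2*n - 1) - f(2*n) + 0.5 * f(2*n + 1)

def accelerated_sum(n_terms):
    """f(1) - f(2) + f(3) - ... rewritten as f(1)/2 + g(1) + g(2) + ..."""
    return 0.5 * f(1) + math.fsum(g(n) for n in range(1, n_terms + 1))

# The question's series equals f(1) - f(2) + f(3) - ...
print(accelerated_sum(10000))  # ≈ 0.3106137077
```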

Answer

Here is a first step toward making the series quicker to evaluate: break it up through partial fractions. One of the resulting pieces can then be summed in closed form.

$$\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1} = -\frac{2 (x-1)}{\left(x^2+1\right)^2}-\frac{1}{x^2+1}$$

I will focus on the second term in the sum.

$$\sum_{x=0}^\infty \frac{(-1)^x}{x^2+1} = \sum_{n=0}^\infty \frac{1}{4n^2+1} - \sum_{k=0}^\infty \frac{1}{(2k+1)^2+1}$$
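This even/odd split is easy to sanity-check numerically; here is a quick pure-Python check (the truncation point $M$ is an arbitrary choice), matching the partial sums term for term:

```python
M = 1000  # arbitrary truncation; both sides use the terms x = 0 .. 2M+1

# Left-hand side: alternating sum over x = 0 .. 2M+1
lhs = sum((-1)**x / (x*x + 1) for x in range(2*M + 2))

# Right-hand side: even terms (x = 2n) minus odd terms (x = 2k+1)
rhs = (sum(1 / (4*n*n + 1) for n in range(M + 1))
       - sum(1 / ((2*k + 1)**2 + 1) for k in range(M + 1)))

print(abs(lhs - rhs))  # zero up to floating-point rounding
```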

Each of the two sums can be evaluated by contour methods. This approach can be found in my post here. In short, for a suitable rational function $f(z)$ (with no poles at the integers and decaying at least as fast as $1/z^2$),

$$\lim_{N\to\infty} \sum_{k = -N}^{k = N} f(k)$$

is equal to the negative of the sum of the residues of $\pi f(z) \cot(\pi z)$ at the poles of $f(z)$. We can use this here because each summand is an even function of the index, so the two-sided sum is twice the one-sided sum (after accounting for the $k=0$ term). This yields

$$\sum_{n=0}^\infty \frac{1}{4n^2+1} = \frac{1}{4} \left(2+\pi \coth \left(\frac{\pi }{2}\right)\right)$$

$$\sum_{k=0}^\infty \frac{1}{(2k+1)^2+1} = \frac{1}{4} \pi \tanh \left(\frac{\pi }{2}\right)$$

Subtract these (and simplify) to find

$$\sum_{x=1}^\infty \frac{(-1)^x}{x^2+1} = \frac{1}{2} (\pi \operatorname{csch}(\pi )-1)$$

Now we can conclude our first step by writing...

$$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1} = -\frac{1}{2} (\pi \operatorname{csch}(\pi )-1) - \sum_{x=1}^\infty(-1)^x\frac{2 (x-1)}{\left(x^2+1\right)^2}$$

It may be tempting to try the same approach for the remaining series, but there we are not summing an even function, so it will not be as easy.
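The closed form $\frac{1}{2}(\pi \operatorname{csch}(\pi)-1)$ is easy to confirm numerically; here is a small Python check (the truncation $N$ is an arbitrary choice; averaging the last two partial sums kills the leading error term of an alternating series):

```python
import math

# Closed form from the contour argument: sum_{x>=1} (-1)^x / (x^2 + 1)
closed_form = 0.5 * (math.pi / math.sinh(math.pi) - 1)

N = 100000
partial = math.fsum((-1)**x / (x*x + 1) for x in range(1, N + 1))
previous = partial - (-1)**N / (N*N + 1)   # the partial sum S_{N-1}
estimate = 0.5 * (partial + previous)      # error O(1/N^3) instead of O(1/N^2)

print(closed_form, estimate)  # agree to roughly machine precision
```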

Answer

For the summation $$S=\sum _{n=1}^{\infty }\left( -1\right) ^{n}\dfrac {-n^{2}-2n+1} {n^{4}+2n^{2}+1}$$ a CAS gave the closed form $$\frac{1}{8} \left(2 \pi ^2 \text{csch}^2\left(\frac{\pi }{2}\right)+(1-i) \left(\psi ^{(1)}\left(\frac{1}{2}-\frac{i}{2}\right)+i \left((-4+4 i)+\psi ^{(1)}\left(\frac{1}{2}+\frac{i}{2}\right)+\psi ^{(1)}\left(1-\frac{i}{2}\right)\right)+\psi ^{(1)}\left(1+\frac{i}{2}\right)\right)\right)$$ where $\psi ^{(1)}$ denotes the trigamma function. Numerically this is approximately $0.31061370769015654201991515962234635408157816305055$, and it can be computed to as many significant figures as required.

Considering Brad's result, for

$$T=\sum_{n=0}^\infty(-1)^n\frac{2 (n-1)}{\left(n^2+1\right)^2}$$ a CAS gave $$\left(-\frac{1}{8}+\frac{i}{8}\right) \left((2+2 i)+(1+i) \pi \left(\pi \text{csch}^2\left(\frac{\pi }{2}\right)+2 \text{csch}(\pi )\right)+\psi ^{(1)}\left(\frac{1}{2}-\frac{i}{2}\right)+i \psi ^{(1)}\left(\frac{1}{2}+\frac{i}{2}\right)+i \psi ^{(1)}\left(1-\frac{i}{2}\right)+\psi ^{(1)}\left(1+\frac{i}{2}\right)\right)$$

Answer

This implementation of Euler's acceleration method is apparently known as Van Wijngaarden's transformation.

Euler's acceleration method, as mentioned in the comments and in other answers, can speed up the convergence of an alternating series. It is easiest to look at it in terms of a more general sequence $(a_n)_{n\in\mathbb N}$ which oscillates around its limit. If this happens, then the limit may be more accurately estimated by the sequence $(b_n)_{n\in\mathbb N}$ defined by

$$b_n=\frac{a_n+a_{n+1}}2$$

since the limit should lie between consecutive terms. In the case of partial sums of an alternating series of rational terms, this new averaged sequence again oscillates around its limit, so taking the averages $(b_n+b_{n+1})/2$ improves the convergence once more. Iterating the averaging indefinitely gives Euler's acceleration method, though in practice it may be more sensible to simply stop after averaging, say, 10 times.

Coding this is very easy: simply compute the partial sums and then average backwards until satisfied.

Let $S_n^{(m)}$ be the $n$th term after averaging $m$ times. Then you can compute it like so:

\begin{array}{c}S_0^{(0)}&&S_1^{(0)}&&S_2^{(0)}&&S_3^{(0)}&&S_4^{(0)}&\cdots\\\downarrow&\swarrow&\downarrow&\swarrow&\downarrow&\swarrow&\downarrow&\swarrow&\cdots\\S_0^{(1)}&&S_1^{(1)}&&S_2^{(1)}&&S_3^{(1)}&\cdots\\\downarrow&\swarrow&\downarrow&\swarrow&\downarrow&\swarrow&\cdots\\S_0^{(2)}&&S_1^{(2)}&&S_2^{(2)}&\cdots\\\downarrow&\swarrow&\downarrow&\swarrow&\cdots\\S_0^{(3)}&&S_1^{(3)}&\cdots\\\downarrow&\swarrow&\cdots\\S_0^{(4)}&\dots\end{array}
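A direct transcription of this triangle into Python (a sketch, applied to the question's series; the depth of 40 partial sums is an arbitrary choice):

```python
def term(x):
    return (-1)**x * (-x*x - 2*x + 1) / (x**4 + 2*x*x + 1)

def euler_accelerated(n):
    """Apex S_0^{(n-1)} of the averaging triangle built on n partial sums."""
    # Row m = 0: the ordinary partial sums S_0^{(0)}, ..., S_{n-1}^{(0)}
    row, s = [], 0.0
    for x in range(1, n + 1):
        s += term(x)
        row.append(s)
    # Each pass replaces the row by the averages of adjacent entries,
    # i.e. S_k^{(m+1)} = (S_k^{(m)} + S_{k+1}^{(m)}) / 2
    while len(row) > 1:
        row = [(a + b) / 2 for a, b in zip(row, row[1:])]
    return row[0]

print(euler_accelerated(40))  # ≈ 0.31061371
```

Note that only 40 evaluations of the term suffice here, versus the $312958$ quoted in the question.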