Finding the best Taylor-esque polynomial


Background

Based only on the information provided by derivatives at a single point, the Taylor series provides the 'best'* approximation of the function within its radius of convergence. However, some methods can yield a series with a larger radius of convergence than the usual Taylor series. My goal is to find out whether there is a best approximation that takes both the radius of convergence and the error in approximating the function into account.


Work done so far

I have come up with 3 different methods which extend the radius of convergence; each is very similar. Here $R$ is the radius of convergence of the regular Taylor series. For convenience, I am also centering the series at $x=0$.

  1. $$\lim_{\varepsilon \to 0} \sum_{n=0}^{\frac{R}{\varepsilon}-1}a_nx^{n}, \quad a_n = \frac{f^{(n)}\left((n+1)\varepsilon\right)}{n!} = \frac{\sum_{m=n}^{\infty}\frac{f^{(m)}(0)}{m!} \frac{m!}{(m-n)!}\left((n+1)\varepsilon \right)^{m-n}}{n!} =\sum_{m=n}^{\infty} \frac{f^{(m)}(0)}{(m-n)!n!}\left((n+1)\varepsilon \right)^{m-n}$$
  2. $$\lim_{\varepsilon \to 0} \sum_{n=0}^{\frac{R}{\varepsilon}}a_nx^{n}, \quad a_n = \left(\sum_{k=0}^{n}\frac{\left(-1\right)^{\left(n-k\right)}}{\left(n-k\right)!k!}\frac{f\left(k\varepsilon\right)}{\varepsilon^{n}}\right)=\sum_{k=0}^{n}\frac{\left(-1\right)^{\left(n-k\right)}}{\left(n-k\right)!k!}\frac{\sum_{m=0}^{\infty}\frac{f^{\left(m\right)}\left(0\right)}{m!}\left(k\varepsilon\right)^{m}}{\varepsilon^{n}}$$
  3. $$\lim_{\varepsilon \to 0}\sum_{a=0}^{\frac{R}{\varepsilon}}\frac{f^{(a)}(0)}{a!}\prod_{k=0}^{a-1}\left(x-\varepsilon k\right) = \left(\sum_{K=1}^{\frac{R}{\varepsilon}}x^{K}\left(\sum_{a=K}^{\frac{R}{\varepsilon}}\frac{f^{(a)}(0)}{a!}\varepsilon^{\,a-K}A\left(K,a-1\right)\right)\right)+f\left(0\right), \\ \text{where } A(w,N) = \frac{1}{2\pi}\int_{0}^{2\pi}\frac{\prod_{n=1}^{N}\left(e^{\theta i}-n\right)}{e^{\theta i\left(w-1\right)}}d\theta$$
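
If it helps to experiment numerically, here is a small sketch of method 2, reading $a_n$ as the $n$-th forward difference of $f$ with step $\varepsilon$ divided by $n!\,\varepsilon^n$. Exact rational arithmetic is used because high-order finite differences suffer catastrophic cancellation in floating point. The choice of test function, step, and truncation order here is just one illustrative setup, not anything canonical.

```python
from fractions import Fraction
from math import comb, factorial

def method2_coeffs(f, eps, N):
    """a_n = sum_{k=0}^n (-1)^(n-k) C(n,k) f(k*eps) / (n! * eps^n),
    i.e. the Taylor coefficient f^(n)(0)/n! with the derivative
    replaced by an n-th forward difference of step eps."""
    coeffs = []
    for n in range(N + 1):
        diff = sum((-1) ** (n - k) * comb(n, k) * f(k * eps)
                   for k in range(n + 1))
        coeffs.append(diff / (factorial(n) * eps ** n))
    return coeffs

def poly_eval(coeffs, x):
    return sum(a * x ** n for n, a in enumerate(coeffs))

# Example: f(x) = 1/(1-x), whose Taylor coefficients at 0 are all 1.
f = lambda x: Fraction(1) / (1 - x)
eps = Fraction(-1, 20)   # epsilon approaches 0 from the negative side
N = 20                   # truncate after x^20, matching R/|eps| = 20

a = method2_coeffs(f, eps, N)
# The coefficients come out damped relative to the Taylor coefficients
# (a_0 = 1, then strictly decreasing), which is what pushes the region
# of convergence away from the singularity at x = 1.
print([float(c) for c in a[:4]])
print(float(poly_eval(a, Fraction(1, 2))))   # compare with f(1/2) = 2
```

For this particular $f$ the coefficients have the closed form $a_n = \prod_{j=1}^{n} \frac{1}{1+j|\varepsilon|}$, which makes the damping explicit.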

Here is what the different expansions look like for the function $\frac{1}{1-x}$. Note that I'm taking the limit from the negative direction in all of these. In general, the direction from which $\varepsilon$ approaches zero is the direction in which the expansion marches, and it should march away from singularities to perform well. For each method (including the Taylor series), I am truncating the series after $x^{20}$.

[Plot: the three expansions and the truncated Taylor series for $\frac{1}{1-x}$]

Here is another instance with a somewhat more complicated function:

[Plot: the same comparison for a more complicated function]
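For method 1 the staggered derivative samples can be written in closed form for $f(x)=\frac{1}{1-x}$: since $f^{(n)}(x)/n! = (1-x)^{-(n+1)}$, sampling the $n$-th derivative at $(n+1)\varepsilon$ gives $a_n = \left(1-(n+1)\varepsilon\right)^{-(n+1)}$. A quick sketch (the step size below is an arbitrary choice for illustration):

```python
# Method 1 for f(x) = 1/(1-x): f^(n)(x)/n! = (1-x)^(-(n+1)),
# so sampling the n-th derivative at (n+1)*eps gives
# a_n = (1 - (n+1)*eps)^(-(n+1)).
def method1_coeffs(eps, N):
    return [(1 - (n + 1) * eps) ** (-(n + 1)) for n in range(N + 1)]

def poly_eval(coeffs, x):
    return sum(a * x ** n for n, a in enumerate(coeffs))

eps = -0.01          # negative direction: marching away from x = 1
N = 100              # truncate at x^100 = x^(R/|eps|)
a = method1_coeffs(eps, N)

# With eps negative the coefficients are damped below the Taylor
# coefficients (which are all 1), and they approach the Taylor
# values as eps -> 0.
print(a[:3])
print(poly_eval(a, 0.5))   # compare with f(0.5) = 2
```

Shrinking $|\varepsilon|$ (and correspondingly raising the truncation order $R/|\varepsilon|$) moves the value at a fixed point inside the radius toward the true one.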

Is there a known method, like the ones shown above, that minimizes something like the L1 norm of the error divided by the length of the interval of convergence? Is there a natural way to measure the quality of an approximation that accounts for both the interval of convergence and the distance from the function (since simply dividing by the length of convergence seems a bit awkward)? I would appreciate any pointers toward directions to research: is there a branch of complex analysis that studies best polynomial approximation, or ways of modifying the Taylor series to converge in a larger region?

Here is a link to a demo on Desmos of the different methods: https://www.desmos.com/calculator/k39t4wf91q

I did some more testing, and I've found that there is at least some sense in which these methods perform better not just than a Taylor series centered at $x=0$, but than a Taylor series centered anywhere. My original impression was that these methods might be somehow equivalent to centering a Taylor series at a location farther away from the singularities, but this is not the case. For instance, if there are singularities spaced one unit apart along the lines $\operatorname{Im}(z)=1$ and $\operatorname{Im}(z)=-1$ (i.e., at $a \pm i$ for every integer $a$), then a Taylor series cannot have a radius of convergence much larger than 1. But it's possible to use some of the previous methods and get something that converges on a range longer than that.

See this question for more details on how 'best' depends on how a good approximation is defined: To what extent is the Taylor polynomial the best polynomial approximation?