Choosing degree of Chebyshev approximation


Chebyshev approximation approximates a function by fitting a weighted sum of Chebyshev polynomials to it. It requires evaluating the function at $N$ sample points to form the weighting coefficients. Interestingly, the weighting coefficients ($c_k$ in the link) approach zero as $k$ gets larger, regardless of $N$, and the usual practice is to truncate the series once a few consecutive coefficients are within some threshold of $0$.
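A minimal sketch of this coefficient decay, using NumPy's Chebyshev utilities (the test function and tolerance here are illustrative choices, not from the question):

```python
import numpy as np

def cheb_coeffs(f, n):
    """Chebyshev series coefficients of the degree-n interpolant of f on [-1, 1]."""
    return np.polynomial.chebyshev.chebinterpolate(f, n)

f = lambda x: np.exp(-x**2) * np.cos(4 * x)   # a smooth test function
c = cheb_coeffs(f, 63)                         # N = 64 samples -> degree 63

# For smooth f the coefficients decay rapidly; truncate once they fall
# below a threshold.
tol = 1e-12
needed = np.nonzero(np.abs(c) > tol)[0].max() + 1
print(f"{needed} of {len(c)} coefficients exceed {tol:g}")
```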

From some quick and dirty tests, it looks like the number of $c_k$ terms you keep before truncating doesn't really increase as $N$ increases. (Is this true?) That is, you'll need about 58 terms at both $N=64$ and $N=8192$.

If so, you want to choose $N$ as small as possible while still capturing all the terms you need, to make the fitting faster. But the rub is that you don't know how many terms you'll need until you've gone through the iterations for some $N$. Overestimating isn't so bad, since you can abort the $O(N^2)$ loop partway through once you decide to truncate the series. But if you underestimate $N$, it seems you have to throw out all your work and start over.
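The naive strategy being described can be sketched as follows (the starting size, tolerance, and tail-check rule are all assumptions made up for illustration):

```python
import numpy as np

def adaptive_cheb(f, tol=1e-12, n0=16, n_max=4096):
    """Naive adaptive fit: double the sample count until the trailing
    coefficients of the interpolant are negligible, then truncate.
    Each doubling here redoes all function evaluations -- exactly the
    waste the question is asking how to avoid."""
    n = n0
    while n <= n_max:
        c = np.polynomial.chebyshev.chebinterpolate(f, n - 1)
        if np.abs(c[-2:]).max() < tol:     # trailing coefficients negligible?
            keep = np.nonzero(np.abs(c) > tol)[0].max() + 1
            return c[:keep]                # truncate the series
        n *= 2                             # underestimated: start over, larger
    raise RuntimeError("did not converge by n_max")

c = adaptive_cheb(lambda x: np.exp(np.sin(3 * x)))
print(f"converged with {len(c)} terms")
```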

Assuming you have some knowledge of the function and the size of your approximation window, is there any guidance for choosing an appropriate $N$? Perhaps based on how "bumpy" the function is, for some definition of bumpy?

Alternatively, is there a way to "seed" an approximation at larger $N$ with an approximation from smaller $N$? E.g., can the function evaluations you've done for $N=64$ help with $N=128$? Looking at the math it didn't seem like it to me, but maybe there are some clever tricks?

BEST ANSWER

To the best of my knowledge, the state of the art in Chebyshev approximation is the code in the Chebfun package. Depending on what you're doing, you can either use Chebfun directly or copy its algorithms (it's written in MATLAB). There is also a partial port of Chebfun to Python, which might make the code easier to read or borrow: Pychebfun on GitHub.

The algorithms are well described in their documentation, and in a book by Lloyd Trefethen, *Approximation Theory and Approximation Practice*, which I highly recommend.

The behaviour you're seeing is not surprising -- the approximation error depends mostly on the degree of the truncated approximant, and that degree is the same whether you truncate a polynomial of degree 64 or one of degree 8192. Chebfun makes it very easy to play with alternative strategies until you find one that meets your needs.

There are theorems that give you bounds on the approximation error. They go by the name of "Jackson" theorems (after Dunham Jackson). You can find them in any decent textbook on approximation theory. As you suspected, the error bounds depend on the differentiability of the function you're approximating.
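One standard shape of such a bound (stated from memory; consult a text for the precise constants): if $f$ has $k$ continuous derivatives on the interval, the error of the best degree-$n$ polynomial approximation satisfies

$$E_n(f) \le \frac{C_k}{n^k}\,\omega\!\left(f^{(k)}, \frac{1}{n}\right),$$

where $\omega$ is the modulus of continuity of $f^{(k)}$ and $C_k$ depends only on $k$. So the smoother ("less bumpy") the function, the fewer terms you need, and for analytic $f$ the coefficients decay geometrically.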

Note that Chebfun constructs approximants by Lagrange interpolation at Chebyshev points (the extrema of Chebyshev polynomials, $x_k = \cos(k \pi / n)$ for $0 \le k \le n$), not by the technique shown on the MathWorld page you referenced. The coefficients of the Chebyshev series are then obtained from this interpolant. It can be shown that this Lagrange approach might increase the error by a factor of $\pi/2$ (see Trefethen's book for details). Chebfun successively performs this Lagrange interpolation with polynomials of degree 8, 16, 32, ... until it finds one whose trailing Chebyshev coefficients are negligible, and then truncates the series where possible. Most of the work in the approximation process is evaluating the function, and those values can be reused each time the degree doubles, because the extrema of a Chebyshev polynomial of degree $2n$ are a superset of those of degree $n$.
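The nesting property in the last sentence is easy to verify numerically; a small sketch (the helper name is made up for illustration):

```python
import numpy as np

def cheb_extrema(n):
    """Chebyshev extreme points x_k = cos(k*pi/n), k = 0..n."""
    return np.cos(np.pi * np.arange(n + 1) / n)

n = 8
coarse = cheb_extrema(n)        # n + 1 points
fine = cheb_extrema(2 * n)      # 2n + 1 points

# The coarse grid reappears as the even-indexed fine points, so the
# n + 1 function values already computed can be reused when doubling;
# only n new evaluations are needed.
assert np.allclose(fine[::2], coarse)
print(len(fine) - len(coarse), "new evaluations needed to double the degree")
```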