Are approximation techniques good only for the region close to the known point?


I have been reading about linearization, quadratic approximation, and approximation theory in general. From the examples discussed there, it seems that the approximation works well only for points close to the known point $x = a$. For points farther from $x = a$, the approximation error grows, so the approximation is less useful there.

Question 1:

Is my understanding of the region of validity of the approximation correct?

Question 2:

If we extend the Taylor series to a reasonable degree $n$, what exactly happens? How does the accuracy increase? What if there are many bumps in the curve near the point of approximation? Can someone illustrate this with a complicated function?

Question 3:

If the above method only approximates the region surrounding a point, is there a way to approximate the entire function (not just the region close to a chosen point) with a decent error bound? If so, what is it?

Is that technique (the answer to Question 3) the one used in machine learning regression to predict new outputs when we already know a set of old inputs and outputs?


Taylor methods, except for analytic functions, don't give a good approximation on the entire domain. In fact, their accuracy may improve only on a shrinking sequence of intervals. This happens for the classic example $f(x)=\begin{cases} 0 & x=0 \\ e^{-1/x^2} & \text{otherwise} \end{cases}$ with Taylor approximants at $x=0$: every derivative of $f$ vanishes at $0$, so all the Taylor polynomials are identically zero, and the error at any fixed $x \neq 0$ never shrinks.
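As a minimal numerical sketch of this contrast (function names are my own), compare the Taylor polynomials of the analytic function $e^x$ at $0$, whose error at a fixed point shrinks as the degree grows, with those of the flat function above, whose error is stuck at $f(x)$ itself:

```python
import math

def f(x):
    # the classic flat function: all derivatives vanish at x = 0
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

def taylor_exp(x, n):
    # degree-n Taylor polynomial of e^x at 0: sum of x^k / k!
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 0.5
for n in (2, 5, 10):
    err_analytic = abs(math.exp(x) - taylor_exp(x, n))
    # every Taylor polynomial of f at 0 is identically 0,
    # so the approximation error is just |f(x)|, independent of n
    err_flat = abs(f(x) - 0.0)
    print(f"degree {n:2d}: e^x error = {err_analytic:.2e}, flat-function error = {err_flat:.2e}")
```

The first column of errors decays rapidly with $n$; the second is the constant $e^{-4} \approx 0.018$ no matter how many terms are taken.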

There are non-Taylor methods for approximating functions. The "holy grail" is the minimax polynomial of a given degree for a function on an interval, which minimizes the maximum error over the interval. This usually cannot be calculated exactly. A more feasible but of course less accurate alternative is to construct an interpolating polynomial at appropriate evaluation points. A good choice of such points is the Chebyshev nodes; a bad choice (for non-analytic functions) is evenly spaced nodes. (To see why, look up the Runge phenomenon.)
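A small sketch of the Runge phenomenon (the node count and test function are just illustrative choices): interpolate Runge's function $1/(1+25x^2)$ on $[-1,1]$ through 15 equispaced nodes and through 15 Chebyshev nodes, and compare the worst-case error on a fine grid:

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n = 15  # number of nodes -> degree-14 interpolating polynomial
xs_eq = np.linspace(-1, 1, n)
xs_ch = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))  # Chebyshev nodes on [-1, 1]

grid = np.linspace(-1, 1, 1000)
for name, xs in (("equispaced", xs_eq), ("Chebyshev", xs_ch)):
    # fitting a degree n-1 polynomial through n points is exact interpolation
    coeffs = np.polyfit(xs, runge(xs), n - 1)
    max_err = np.max(np.abs(np.polyval(coeffs, grid) - runge(grid)))
    print(f"{name:10s} max error: {max_err:.3f}")
```

The equispaced interpolant oscillates wildly near the endpoints and its maximum error exceeds 1, while the Chebyshev interpolant's error is well below 0.5; refining the Chebyshev grid keeps improving the fit, whereas refining the equispaced grid makes it worse.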

One can also use non-interpolatory methods based on minimizing some error functional (usually a least-squares-type error). This is the most common choice in machine learning (where we usually have very high-dimensional data with relatively low-dimensional behavior). The difficulty is coming up with the structure of an appropriate model; merely finding the parameters of a given model is straightforward, even if sometimes laborious.
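A minimal sketch of this idea (the model structure, a degree-5 polynomial, and the noise level are illustrative assumptions): rather than interpolating noisy samples exactly, choose a fixed model and solve for its parameters by least squares, which keeps the residual near the noise level instead of chasing every data point:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
# noisy samples of an underlying function
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

# the model structure is fixed in advance (degree-5 polynomial);
# with 50 points and only 6 parameters, polyfit solves a least-squares problem
coeffs = np.polyfit(x, y, 5)
rms_residual = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
print(f"RMS residual: {rms_residual:.3f}")
```

The residual lands near the noise standard deviation (0.1), which is the point: the hard part was deciding that a degree-5 polynomial is an adequate model, not computing its six coefficients.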