I have a set of $N$ data points, $(x_i, y_i)$, which come from sampling the $\sin$ function. Assuming the sampling period is $T$, then $x_i = iT$. If I interpolate them without centering, namely with $x_i = T, 2T, 3T, \dots, NT$, the resulting polynomial tends to diverge as $N$ increases. However, if I center the $x_i$, namely replace $x_i$ with $\hat{x}_i = iT - NT/2$, the result is much better.
It seems this kind of centering stabilizes the interpolation. What's the theory behind this phenomenon? Many thanks :)
For a better understanding of the question, let's take an example. Assume $N = 3$, and let the polynomial to be determined have the form
$f(x) = a + bx + cx^2$.
A sequence of three samples from the $\sin$ function is
$(T, \sin T),\quad (2T, \sin 2T),\quad (3T, \sin 3T).$
We can use this sequence of data to solve for the above polynomial. However, if we center the data to
$(-T, \sin T),\quad (0, \sin 2T),\quad (T, \sin 3T),$
the resulting interpolating polynomial is more stable than the previous one.
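To make the $N = 3$ case concrete, here is a small NumPy sketch of both fits (the value $T = 0.5$ is an arbitrary choice for the demo). Both linear systems are solved exactly; centering just shifts the abscissas by $2T$, so the centered coefficients describe the same curve in the variable $x - 2T$.

```python
import numpy as np

T = 0.5                                # arbitrary sampling period for the demo
xs = np.array([T, 2 * T, 3 * T])       # raw abscissas T, 2T, 3T
ys = np.sin(xs)                        # samples of sin

# Raw fit: columns [1, x, x^2], solve for (a, b, c) in f(x) = a + b x + c x^2
A_raw = np.vander(xs, 3, increasing=True)
a_raw = np.linalg.solve(A_raw, ys)

# Centered fit: same samples, abscissas shifted to -T, 0, T
A_cen = np.vander(xs - 2 * T, 3, increasing=True)
a_cen = np.linalg.solve(A_cen, ys)

# Both polynomials reproduce the three samples exactly
print(A_raw @ a_raw - ys)
print(A_cen @ a_cen - ys)
```

Evaluating the raw polynomial at $x$ and the centered one at $x - 2T$ gives the same value, since they interpolate the same three points.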
A few more words: the above is just a simplified example; the phenomenon only becomes obvious when $N$ is large.
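One way to reproduce the effect for large $N$ is to compare the condition numbers of the two Vandermonde matrices that the fits are built on; the values of $T$ and $N$ below are arbitrary choices for the demo. The centered nodes are smaller in magnitude and roughly symmetric about the origin, and the centered matrix turns out far better conditioned:

```python
import numpy as np

T = 0.1                             # sampling period (arbitrary for the demo)
N = 20                              # number of samples; the effect grows with N

x = T * np.arange(1, N + 1)         # raw abscissas T, 2T, ..., NT
xc = x - N * T / 2                  # centered abscissas

# Vandermonde matrices underlying the two polynomial fits
V_raw = np.vander(x, N)
V_cen = np.vander(xc, N)

print("cond(raw)      =", np.linalg.cond(V_raw))
print("cond(centered) =", np.linalg.cond(V_cen))
```

The raw system's condition number is dramatically larger, which is why the uncentered coefficients (and hence the interpolant, once rounding enters) blow up as $N$ grows.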