Deriving Newton-Gregory via Newton series


I am attempting a derivation of the Newton-Gregory polynomial interpolation formula by means of Newton series and the calculus of finite differences. The formula is usually given by defining $s = \frac{x-x_0}{h}$ and stating that $ \displaystyle p(x) = p(x_0 + hs) = \sum_{n=0}^{k} {s\choose{n}}\Delta ^n [f](x_0)$ for $k+1$ data points $\{x_i: x_i = x_0 + ih,\ 0 \le i \le k\}$ interpolated by a polynomial of degree at most $k$.
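As a sanity check, here is a small Python sketch (with an arbitrary cubic as the hypothetical sample function) confirming that this formula reproduces the data points:

```python
import math

# Hypothetical sample data: f sampled at equally spaced nodes x_i = x0 + i*h.
f = lambda x: x**3 - 2*x + 1
x0, h, k = 1.0, 0.5, 3
ys = [f(x0 + i*h) for i in range(k + 1)]

def forward_differences(ys):
    """Return [Δ^0 f(x0), Δ^1 f(x0), ..., Δ^k f(x0)]."""
    diffs, row = [ys[0]], list(ys)
    for _ in range(len(ys) - 1):
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def binom(s, n):
    """Generalized binomial coefficient s(s-1)...(s-n+1)/n! for real s."""
    p = 1.0
    for i in range(n):
        p *= (s - i)
    return p / math.factorial(n)

def newton_gregory(x, x0, h, diffs):
    s = (x - x0) / h
    return sum(binom(s, n) * d for n, d in enumerate(diffs))

diffs = forward_differences(ys)
# Exact at the nodes, and (since f is cubic and k = 3) in between as well:
print([newton_gregory(x0 + i*h, x0, h, diffs) for i in range(k + 1)])
print(newton_gregory(1.25, x0, h, diffs), f(1.25))  # both 0.453125
```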

My thought process was as follows. We are trying to fit the data of $f$, so take its Newton series expansion around $x_0$: \begin{equation}f(x) = \sum_{n=0}^{\infty} \frac{\Delta ^n f(x_0)}{n!}(x-x_0)^{\underline{n}} \end{equation}

Substituting $x - x_0 = sh$, this equals \begin{equation}f(x) = \sum_{n=0}^{\infty} \frac{\Delta ^n f(x_0)}{n!}(sh)^{\underline{n}} \end{equation}

Then, using the identity $\frac{x^{\underline{k}}}{k!} = {x\choose k}$, truncating to the degree of the polynomial, and discarding the remainder, \begin{equation*} \displaystyle p(x) = p(x_0 + hs) = \sum_{n=0}^{k} {sh\choose{n}}\Delta ^n [f](x_0) \end{equation*}

Which is... not the same, although very close, and it seems intuitive. What am I doing wrong? I am aware that I have ignored convergence, but let us assume $f$ has strong enough properties to ensure it; I do not necessarily need a rigorous proof.
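To make the discrepancy concrete, here is a quick numerical check (with an arbitrary quadratic as the sample function): the standard version with $\binom{s}{n}$ reproduces $f$, while my version with $\binom{sh}{n}$ does not.

```python
import math

def binom(s, n):
    # generalized binomial coefficient for a real upper argument
    p = 1.0
    for i in range(n):
        p *= (s - i)
    return p / math.factorial(n)

# f(x) = x^2 sampled at x0 = 0 with step h = 0.5; forward differences at x0
ys = [x**2 for x in (0.0, 0.5, 1.0)]     # [0.0, 0.25, 1.0]
d0 = ys[0]                                # Δ^0 f(x0) = 0.0
d1 = ys[1] - ys[0]                        # Δ^1 f(x0) = 0.25
d2 = ys[2] - 2*ys[1] + ys[0]              # Δ^2 f(x0) = 0.5

x, h = 0.75, 0.5
s = x / h                                 # s = (x - x0)/h = 1.5

standard = sum(binom(s, n) * d for n, d in enumerate((d0, d1, d2)))
mine     = sum(binom(s*h, n) * d for n, d in enumerate((d0, d1, d2)))
print(standard, x**2)   # 0.5625 0.5625 — agree
print(mine)             # 0.140625 — does not
```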


Best answer

The canonical Newton interpolation formula for the function $f(x)$ over the $n+1$ points $x=0,1,\cdots,n$ is the degree-$n$ polynomial $$ p_n(x) = \sum_{j=0}^{n} \left.\Delta^{\,j} f(x)\right|_{x=0} \frac{x^{\underline{j}}}{j!} $$ where $$ \begin{cases} \Delta^{\,0} f(x) = f(x) \\ \Delta f(x) = f(x+1) - f(x) \\ \Delta^{\,2} f(x) = \Delta\left(\Delta f(x)\right) = f(x+2) - 2f(x+1) + f(x) \\ \quad\quad \vdots \\ \Delta^{\,q} f(x) = \displaystyle\sum_{k=0}^{q} (-1)^{q-k} \binom{q}{k} f(x+k) \end{cases} $$
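A minimal Python sketch (with an arbitrary test function) confirming that the iterated definition of $\Delta^q f$ and its closed-form binomial sum agree:

```python
import math

def delta(f, q, x):
    """Iterated forward difference Δ^q f(x), built recursively from Δf(x) = f(x+1) - f(x)."""
    if q == 0:
        return f(x)
    return delta(f, q - 1, x + 1) - delta(f, q - 1, x)

def delta_binomial(f, q, x):
    """Closed form: Δ^q f(x) = Σ_k (-1)^(q-k) C(q,k) f(x+k)."""
    return sum((-1)**(q - k) * math.comb(q, k) * f(x + k) for k in range(q + 1))

f = lambda x: 3*x**2 + x + 7   # arbitrary integer-valued test function
for q in range(5):
    print(q, delta(f, q, 2), delta_binomial(f, q, 2))  # the two columns match
```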

That is because, at $x = m$ with $0 \le m \le n$, the polynomial takes all the required values: $$ \begin{aligned} p_n(m) &= \sum_{j=0}^{n} \left.\Delta^{\,j} f(x)\right|_{x=0} \frac{m^{\underline{j}}}{j!} && (1)\\ &= \sum_{j=0}^{n} \sum_{k=0}^{j} (-1)^{j-k} \binom{j}{k} f(k)\, \frac{m^{\underline{j}}}{j!} = \sum_{j=0}^{m} \sum_{k=0}^{j} (-1)^{j-k} \binom{m}{j} \binom{j}{k} f(k) && (2)\\ &= \sum_{j=0}^{m} \sum_{k=0}^{m} (-1)^{j-k} \binom{m}{j} \binom{j}{k} f(k) && (3)\\ &= \sum_{j=0}^{m} \sum_{k=0}^{m} (-1)^{j-k} \binom{m}{k} \binom{m-k}{j-k} f(k) = \sum_{k=0}^{m} \binom{m}{k} f(k) \sum_{l=0}^{m-k} (-1)^{l} \binom{m-k}{l} && (4)\\ &= \sum_{k=0}^{m} \binom{m}{k} f(k)\, (1-1)^{m-k} = f(m) && (5) \end{aligned} $$
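The fact that $p_n(m) = f(m)$ at the nodes can also be checked numerically. The sketch below uses a hypothetical non-polynomial $f$, so the interpolant is exact only at the nodes $0,\ldots,n$:

```python
import math

def fwd_diff_at0(f, j):
    # Δ^j f evaluated at 0, via the binomial expansion of the forward difference
    return sum((-1)**(j - k) * math.comb(j, k) * f(k) for k in range(j + 1))

def falling(m, j):
    # falling factorial m^(underline j) = m(m-1)...(m-j+1)
    p = 1
    for i in range(j):
        p *= (m - i)
    return p

def p(f, n, m):
    # canonical Newton interpolation polynomial over the nodes 0..n;
    # note falling(m, j)/j! is just the binomial coefficient C(m, j)
    return sum(fwd_diff_at0(f, j) * (falling(m, j) // math.factorial(j))
               for j in range(n + 1))

f = lambda x: 2**x + x   # hypothetical non-polynomial sample function
n = 4
print([p(f, n, m) for m in range(n + 1)])  # [1, 3, 6, 11, 20]
print([f(m) for m in range(n + 1)])        # [1, 3, 6, 11, 20]
```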

(see the notes to the equations below) and, as such, it gives the same result as any other polynomial interpolation approach.

Now, if the abscissas are not at $x=0,1,\cdots,n$ but at $x=x_0, x_0+h,\cdots,x_0+nh$, then you apply a translation and a change of scale by putting $$ y = \frac{x - x_0}{h} \quad\Leftrightarrow\quad x = x_0 + h\,y, \qquad f(x) = f(x_0 + h\,y) = g(y) $$ thereby getting $$ q_n(y) = \sum_{j=0}^{n} \left.\Delta_y^{\,j} g(y)\right|_{y=0} \frac{y^{\underline{j}}}{j!} = \sum_{j=0}^{n} \left.\Delta_y^{\,j} f(x_0 + h\,y)\right|_{y=0} \frac{1}{j!} \left(\frac{x - x_0}{h}\right)^{\underline{j}} $$

You can expand the falling factorial as $$ \left(\frac{x-x_0}{h}\right)^{\underline{j}} = \left(\frac{x-x_0}{h}\right)\left(\frac{x-x_0}{h} - 1\right) \cdots \left(\frac{x-x_0}{h} - j + 1\right) = \frac{1}{h^{j}}\,(x - x_0)\left(x - (x_0 + h)\right) \cdots \left(x - (x_0 + (j-1)h)\right) $$ and thus rewrite the previous identity in the formalism of divided differences. In any case, the connection between the two approaches should now be clear, and in particular it is clear that somewhere you must account for the scaling factor $1/h^j$: the differences $\Delta_y^{\,j} g$ are unit-step differences in $y$, which is why the binomial coefficient comes out as $\binom{s}{n}$ rather than $\binom{sh}{n}$.
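The rescaling identity above can be verified numerically; the values of $x_0$, $h$, $x$, and $j$ below are arbitrary:

```python
# Check: ((x - x0)/h)^(underline j) == (1/h^j) * Π_{i=0}^{j-1} (x - (x0 + i*h))
x0, h, x, j = 1.0, 0.25, 2.3, 4

def falling(t, j):
    # falling factorial t(t-1)...(t-j+1) for real t
    p = 1.0
    for i in range(j):
        p *= (t - i)
    return p

lhs = falling((x - x0) / h, j)
rhs = 1 / h**j
for i in range(j):
    rhs *= (x - (x0 + i*h))
print(lhs, rhs)   # both 153.7536
```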

----- notes to the equations for $p_n(m)$ -----

line (2)
since $j$ is non-negative, the bound $k \le j$ is implicit in the second binomial coefficient $\binom{j}{k}$, so we can extend the sum over $k$ up to $m$ (or even omit both bounds). Likewise, $\binom{m}{j} = 0$ for $m < j \le n$, which is why the upper limit of the $j$-sum drops from $n$ to $m$.

line (3)

the identity $$ \binom{m}{j}\binom{j}{k} = \frac{m^{\underline{j}}}{j!} \cdot \frac{j!}{(j-k)!\,k!} = \frac{m^{\underline{k+(j-k)}}}{(j-k)!\,k!} = \frac{m^{\underline{k}}\,(m-k)^{\underline{j-k}}}{(j-k)!\,k!} = \frac{m^{\underline{k}}}{k!} \cdot \frac{(m-k)^{\underline{j-k}}}{(j-k)!} = \binom{m}{k}\binom{m-k}{j-k} \qquad m \in \mathbb{R},\; j,k \in \mathbb{Z} $$

is one of the fundamental identities obeyed by binomial coefficients, called by some authors "trinomial revision".
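A brute-force numerical check of trinomial revision over a small range of integer arguments:

```python
import math

# Verify C(m,j)*C(j,k) == C(m,k)*C(m-k,j-k) for all 0 <= k <= j <= m < 10
ok = all(
    math.comb(m, j) * math.comb(j, k) == math.comb(m, k) * math.comb(m - k, j - k)
    for m in range(10) for j in range(m + 1) for k in range(j + 1)
)
print(ok)  # True
```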