Error bound for Gaussian Quadrature


Suppose that $x_1, x_2, \dots, x_N$ are the roots of a polynomial orthogonal with respect to the weight $w(x)$ on the interval $[a,b] \subset \mathbf{R}$. We know then that the Gaussian quadrature formula is exact for all polynomials of degree less than $2N$.

In order to evaluate the integral of $f(x)$ on the interval $[a,b]$ we use this theorem: $$ f(x) - P_N(x) = \frac{f^{(N)}(\xi(x))}{N!} \prod_{i=1}^{N} (x-x_i) = R_N(x) $$

where $P_N(x)$ is the interpolating polynomial, i.e. $P_N(x_i) = f(x_i)$, and $\xi(x) \in [a,b]$.

Using Gaussian quadrature, the error we make is $$ E_N[f] = \int_{a}^b R_N(x)\, dx $$

How can we show that the error is in fact this quantity: $$ \frac{f^{(2N)}(\xi)}{(2N)!} \int_{a}^b \left( \prod_{i=1}^N (x-x_i) \right)^2 dx$$
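As a concrete sanity check of this claim (an illustration, not a proof), the sketch below assumes the standard Gauss–Legendre rule on $[-1,1]$ with $N=3$ and the test function $f(x)=x^{2N}$, for which $f^{(2N)}(\xi)/(2N)! = 1$, so the predicted error reduces exactly to the integral of the squared node polynomial:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# For f(x) = x^(2N) on [-1, 1], f^(2N)(xi)/(2N)! = 1, so the claimed
# error formula predicts: error = integral of (prod_i (x - x_i))^2.
N = 3
nodes, weights = leggauss(N)

exact = 2.0 / (2 * N + 1)                  # ∫_{-1}^{1} x^6 dx = 2/7
quad = float(np.dot(weights, nodes ** (2 * N)))
error = exact - quad

pi_N = np.poly(nodes)                      # monic polynomial with the nodes as roots
anti = np.polyint(np.polymul(pi_N, pi_N))  # antiderivative of (prod (x - x_i))^2
predicted = np.polyval(anti, 1.0) - np.polyval(anti, -1.0)

print(error, predicted)                    # both equal 8/175 ≈ 0.0457142857
```

Both numbers come out to $8/175$, as the formula predicts.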

Thank you so much in advance.

There is 1 answer below.
As you want order $2n$, you need at least $2n$ points if the integration formula is based on Lagrange interpolation. These points can be freely chosen, so select them so that for each point $x_k$ you take the two points $x_k\pm\epsilon$. This $\epsilon$ only serves to pass from Lagrange interpolation to Hermite interpolation; it can be arbitrarily small, independent of the configuration of the $x_k$.

Denote the usual Lagrange kernels for $x_1,...,x_n$ as $$ L_k(x)=\prod_{j=1,\,j\ne k}^n\frac{x-x_j}{x_k-x_j} $$ Then the interpolation for the set of $2n$ points is \begin{align} f(x)&=\sum_{k=1}^nf(x_k+ϵ)\frac{x-(x_k-ϵ)}{(x_k+ϵ)-(x_k-ϵ)}(L_k(x)^2+O(ϵ)) \\&~~~~~+f(x_k-ϵ)\frac{x-(x_k+ϵ)}{(x_k-ϵ)-(x_k+ϵ)}(L_k(x)^2+O(ϵ))+E_ϵ(x) \\ &=\sum_{k=1}^nf(x_k)L_k(x)^2+\sum_{k=1}^nf'(x_k)(x-x_k)L_k(x)^2+E(x)+O(ϵ) \end{align} where likewise $$ E_ϵ(x)=E(x)+O(ϵ), ~~~ E(x)=R(x)\prod_{k=1}^n(x-x_k)^2, $$ and $R(x)=f[x,x_1,x_1,x_2,x_2,...,x_n,x_n]$ is a generalized divided difference from the Newton interpolation formula. It is a continuous function in $x$ and can be represented as $$ R(x)=\frac{f^{(2n)}(\xi_x)}{(2n)!} $$
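The limiting Hermite interpolant and the remainder representation $E(x)=R(x)\prod_k(x-x_k)^2$ can be checked numerically. The sketch below uses SciPy's `KroghInterpolator` (repeating each abscissa makes it a Hermite interpolant matching $f$ and $f'$), with the arbitrary choices $f=\exp$, $n=3$ and some hand-picked nodes, and verifies that $R(x)$ lies in the range of $f^{(2n)}/(2n)!$ on $[-1,1]$:

```python
import numpy as np
from math import factorial, e
from scipy.interpolate import KroghInterpolator

# Check E(x) = R(x) * prod_k (x - x_k)^2 with R(x) = f^(2n)(xi_x)/(2n)!
# for f = exp.  KroghInterpolator with each node repeated twice builds
# the Hermite interpolant matching f and f' at the nodes.
n = 3
nodes = np.array([-0.5, 0.1, 0.7])         # any distinct nodes in [-1, 1]
xi = np.repeat(nodes, 2)
yi = np.empty(2 * n)
yi[0::2] = np.exp(nodes)                   # f(x_k)
yi[1::2] = np.exp(nodes)                   # f'(x_k)  (f = exp, so f' = f)
H = KroghInterpolator(xi, yi)

x = 0.3                                    # any non-node evaluation point
E = np.exp(x) - H(x)
R = E / np.prod((x - nodes) ** 2)

# Since f^(2n) = exp, R must lie between exp(-1)/(2n)! and exp(1)/(2n)!
print(1 / (e * factorial(2 * n)) <= R <= e / factorial(2 * n))   # True
```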


The aim of the Gauss integration rule is to avoid the derivatives of $f$ in the formula. Thus the integral over these terms has to vanish: $$ 0=\int_a^b (x-x_k)L_k(x)^2\,dx $$ This gives a system of $n$ polynomial equations for the $n$ numbers $x_1,...,x_n$. With some clever transformations one can describe the polynomial $P(x)=\prod_{j=1}^n(x-x_j)\sim (x-x_k)L_k(x)$ that has these numbers as roots. The above equation can be interpreted as saying that this polynomial is orthogonal to every $L_k$ in the standard $L^2$ scalar product. Thus it is orthogonal to all polynomials of degree less than $n$, which, after a little work with integration by parts, gives $$P(x)=\frac{n!}{(2n)!}\frac{d^n}{dx^n}[((x-b)(x-a))^n].$$
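Both claims — the orthogonality conditions and the Rodrigues-type formula for $P$ — can be verified numerically, e.g. for $n=3$ on $[-1,1]$ (a sketch using NumPy's polynomial helpers; `leggauss` supplies the Gauss–Legendre nodes):

```python
import numpy as np
from math import factorial
from numpy.polynomial.legendre import leggauss

# Check, for n = 3 on [-1, 1]:
#  (1) the orthogonality conditions  ∫ (x - x_k) L_k(x)^2 dx = 0, and
#  (2) the formula  P(x) = n!/(2n)! d^n/dx^n [((x-1)(x+1))^n].
n = 3
nodes, _ = leggauss(n)

residuals = []
for k, xk in enumerate(nodes):
    Lk = np.poly(np.delete(nodes, k))      # prod_{j != k} (x - x_j)
    Lk = Lk / np.polyval(Lk, xk)           # normalize so L_k(x_k) = 1
    integrand = np.polymul([1.0, -xk], np.polymul(Lk, Lk))  # (x - x_k) L_k^2
    anti = np.polyint(integrand)
    residuals.append(np.polyval(anti, 1.0) - np.polyval(anti, -1.0))
print(np.max(np.abs(residuals)))           # ≈ 0 (rounding error only)

q = np.array([1.0])
for _ in range(n):
    q = np.polymul(q, [1.0, 0.0, -1.0])    # multiply by (x^2 - 1)
for _ in range(n):
    q = np.polyder(q)                      # take the n-th derivative
P_rodrigues = q * factorial(n) / factorial(2 * n)
print(np.allclose(P_rodrigues, np.poly(nodes)))   # True
```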


Once this is solved, the quadrature rule reads $$ \int_a^bf(x)\,dx=\sum_{k=1}^n w_kf(x_k)+\int_a^b R(x)P(x)^2\,dx =\ldots+\frac{f^{(2n)}(\xi_c)}{(2n)!}\int_a^b P(x)^2\,dx \\~\\ w_k=\int_a^b L_k(x)^2\,dx $$ where $c$ is the intermediate point from the mean value theorem for integrals and $\xi_c$ the intermediate point from converting the divided difference at $c$ into a derivative.
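One can also check numerically that these weights $w_k=\int_a^b L_k(x)^2\,dx$ coincide with the standard Gauss–Legendre weights, e.g. for $n=3$ on $[-1,1]$ (a sketch; `leggauss` serves as the reference):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Check w_k = ∫ L_k(x)^2 dx against the standard Gauss-Legendre weights
# for n = 3 on [-1, 1].
n = 3
nodes, weights = leggauss(n)

w = []
for k, xk in enumerate(nodes):
    Lk = np.poly(np.delete(nodes, k))      # prod_{j != k} (x - x_j)
    Lk = Lk / np.polyval(Lk, xk)           # normalize so L_k(x_k) = 1
    anti = np.polyint(np.polymul(Lk, Lk))  # antiderivative of L_k(x)^2
    w.append(np.polyval(anti, 1.0) - np.polyval(anti, -1.0))
print(np.allclose(w, weights))             # True  (weights 5/9, 8/9, 5/9)
```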