Taylor approximation is not optimal


My professor gave a lecture on orthogonal-polynomial-based approximation and its advantages over the Taylor series expansion. His statement was: "In a weighted $L_2$ space, the Taylor series expansion is not optimal in the inner product sense, whereas the orthogonal polynomial approximation is." I roughly know that the Taylor series approximation has limitations, such as requiring the function to be analytic or infinitely differentiable. But how is the Taylor series not optimal in the inner product sense?

Any suggestions towards finding the reason would greatly be appreciated.

2 Answers

Best Answer

Let us call the weighted $L^2$ space $W$, assume all the polynomials belong to $W$ and let $\Pi_n$ denote the subspace of polynomials of degree $n$ or less.

The essential point to note is that $p \in \Pi_n$ is the closest approximation to $f \in W$ (in the sense of the $W$ norm) if and only if $f-p$ is orthogonal to $\Pi_n$. If $p_0, p_1, \cdots, p_n, \cdots$ are the $W$-orthogonal polynomials, each with unit norm, then the $n$th order approximation \begin{align*} f_n = \sum_{k=0}^n \langle f, p_k \rangle p_k \end{align*} is optimal, because $p_0, p_1, \cdots, p_n$ span $\Pi_n$ and $f-f_n$ is easily verified to be in $\Pi_n^\perp$. Nor is it hard to see that such an expansion is unique. In particular, the $n$th degree Taylor polynomial cannot be any better. Note it might be no worse either, an obvious case being when $f$ is itself a polynomial of degree at most $n$.
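As a quick numerical sanity check of this optimality claim, here is a sketch using NumPy's Legendre utilities on $[-1,1]$ with weight $1$ (the choices $f = e^x$ and degree $3$ are arbitrary illustrations, not from the answer): the $L^2$ projection onto $\Pi_3$ beats the degree-3 Taylor polynomial in the $L^2$ norm.

```python
import numpy as np
from math import factorial

# Gauss-Legendre quadrature nodes/weights to evaluate integrals on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(50)

f = np.exp
n = 3  # polynomial degree

# Degree-n Taylor polynomial of e^x about 0: sum_{k<=n} x^k / k!
taylor_vals = sum(nodes**k / factorial(k) for k in range(n + 1))

# L2 projection onto span{P_0, ..., P_n}; on [-1, 1], <P_k, P_k> = 2/(2k+1)
proj_vals = np.zeros_like(nodes)
for k in range(n + 1):
    Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
    ck = np.sum(weights * f(nodes) * Pk) / (2.0 / (2 * k + 1))
    proj_vals += ck * Pk

# L2 norms of the two errors, computed by quadrature
err_taylor = np.sqrt(np.sum(weights * (f(nodes) - taylor_vals) ** 2))
err_proj = np.sqrt(np.sum(weights * (f(nodes) - proj_vals) ** 2))
print(err_taylor, err_proj)  # projection error is the smaller one
```

The projection error is strictly smaller here because the degree-3 Taylor polynomial of $e^x$ is not equal to the projection, and the projection is the unique minimiser.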

The key result is: if $H$ is an inner product space, $x \in H$ and $V$ is a subspace of $H$, then $x \in V^\perp$ if and only if $$ \lVert v-x \rVert \geqslant \lVert x \rVert$$ for every $v \in V$. To prove it, first assume $x \in V^\perp$; then for any $v \in V$, \begin{align*} \lVert v-x\rVert^2 &= \langle v-x, v-x\rangle \\ &= \lVert v \rVert^2 + \lVert x \rVert^2 \\ &\geqslant \lVert x \rVert^2. \end{align*} Conversely, if $\lVert v -x \rVert \geqslant \lVert x \rVert$ for all $v \in V$, then for any $u \in V$ and $\alpha \in \mathbb C$ we also have $\alpha u \in V$, and \begin{align*} \lVert x \rVert ^2 &\leqslant \lVert x - \alpha u \rVert ^2 \\ &= \lVert x \rVert^2 - \alpha \langle u, x \rangle - \overline{\alpha \langle u, x \rangle} +|\alpha|^2 \lVert u \rVert^2 \\ &= \lVert x \rVert^2 - 2 \Re \Big( \alpha \langle u, x\rangle \Big) + |\alpha|^2 \lVert u \rVert ^2. \tag{1}\label{BPA-1} \end{align*} Now choose $\theta$ so that $e^{i\theta}\langle u, x \rangle$ is real and non-negative, and for any $r > 0$ let $\alpha = re^{i\theta}$. Cancel $\lVert x \rVert^2$ on each side of inequality \eqref{BPA-1}, then divide by $r > 0$, to get \begin{align*} 2 \lvert \langle u, x \rangle \rvert \leqslant r \lVert u \rVert^2. \end{align*} Since $r > 0$ is arbitrarily small, we must have $\langle u, x \rangle = 0$; and since $u \in V$ was arbitrary, $x \in V^\perp$.

Another Answer

As a simple explanation why: The Taylor polynomial gives you the best possible approximation at one single point, and the further away you are from this point, the worse the approximation. The Taylor polynomial doesn’t even try to keep the error low when you are away from that point.
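This locality is easy to see numerically. A minimal sketch (the function $e^x$ and degree $3$ are my illustrative choices, not from the answer): the degree-3 Taylor polynomial about $0$ is essentially exact near the expansion point, but its error grows rapidly as you move away.

```python
import numpy as np
from math import factorial

f = np.exp
n = 3

def taylor(x):
    # Degree-n Taylor polynomial of e^x about 0
    return sum(x**k / factorial(k) for k in range(n + 1))

# Error is tiny near the expansion point 0 and grows away from it
for x in (0.1, 0.5, 1.0, 2.0):
    print(x, abs(f(x) - taylor(x)))
```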

Polynomials that find an approximation minimising some norm tend to keep the error down over a whole range of values. They have to, since the errors over the whole range contribute to the error norm that is minimised.

Interpolating polynomials can cause trouble if you don't watch out at which points you interpolate (equally spaced nodes famously fail for Runge's function). A method that is mostly numerical is minimising the maximum error over an interval; Chebyshev has a nice theorem for that (equioscillation). If you do numerical mathematics and want the highest possible numerical precision, you minimise the sum of the polynomial error and the rounding error, and you take into account that your polynomial coefficients should be floating-point numbers.
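The point about choosing interpolation points carefully can be sketched numerically. Assuming Runge's function $1/(1+25x^2)$ and degree $10$ as the standard illustration: interpolating at equally spaced points gives a large maximum error, while interpolating at Chebyshev points (a near-minimax choice) keeps it small.

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)  # Runge's function

deg = 10
xs = np.linspace(-1, 1, 2001)  # dense grid to estimate the max error

# Interpolation at deg+1 equally spaced points (fit with an exact-size system)
eq_nodes = np.linspace(-1, 1, deg + 1)
p_eq = np.polynomial.polynomial.Polynomial.fit(eq_nodes, f(eq_nodes), deg)

# Interpolation at Chebyshev points of the first kind
p_ch = np.polynomial.chebyshev.Chebyshev.interpolate(f, deg)

err_eq = np.max(np.abs(f(xs) - p_eq(xs)))
err_ch = np.max(np.abs(f(xs) - p_ch(xs)))
print(err_eq, err_ch)  # equispaced error is much larger (Runge phenomenon)
```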