Polynomials with a fixed maximum degree cannot be used to approximate functions uniformly to any desired accuracy?


Let $C[a,b]$ denote the set of all continuous, real-valued maps on the interval $[a,b]$. Let $P_n$ denote the set of all real polynomials on $[a,b]$ which have a maximum degree of $n$. Let $P=\cup_{n\geq 1}P_n$.

Then Weierstrass's approximation theorem says that the closure of $P$ in $C[a,b]$ (with the uniform norm) is all of $C[a,b]$. This would seem to hint that $P_n$ alone is not dense in $C[a,b]$. And indeed, one can show that $P_n$ is a closed (it is a finite-dimensional subspace) and obviously proper subset of $C[a,b]$; so some $f\in C[a,b]$ has positive distance from $P_n$, and since that distance scales with $f$, for any $n\in\mathbb{N}$ and any $\epsilon>0$ there exists an $f\in C[a,b]$ such that $$ \inf_{p\in P_n}\|p-f\|_\infty\geq\epsilon\,. $$ Polynomials of degree at most $n$ are determined by their values at $n+1$ distinct points. Thus, this result basically says that if we are limited to interpolating a function at a bounded number of points, there will still exist continuous functions that we cannot approximate well.

Can one give a constructive proof of this fact? In other words, given $n\in\mathbb{N}$ and $\epsilon>0$ can we explicitly construct a function $f$ such that $$ \inf_{p\in P_n}\|p-f\|_\infty\geq\epsilon\,? $$ For bonus points, can we construct $f$ to be a polynomial of degree $n+1$?

For the first question, I expect a function with enough oscillation will accomplish this goal; perhaps something as simple as $\bar{\epsilon}\sin(\bar{n}x)$ for an appropriate $\bar{\epsilon}$ depending on $\epsilon$ and $\bar{n}$ depending on $n$.
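As a numerical sketch of this oscillation idea (the setup is mine, not from the question): take $f(x)=\sin(2\pi(n+1)x)$ on $[0,1]$, approximate its $L^2$ projection onto $P_n$ by a discrete least-squares fit, and use the residual's $L^2$ norm as a lower bound for $\|f-p\|_\infty$ over every $p\in P_n$ (valid since $\|h\|_2\leq\|h\|_\infty$ on an interval of length $1$):

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 20_001)   # fine grid on [0, 1]
dx = xs[1] - xs[0]

def sup_dist_lower_bound(n):
    """Approximate lower bound on inf over p in P_n of ||f - p||_inf,
    for f(x) = sin(2*pi*(n+1)*x), via the L^2 projection residual."""
    f = np.sin(2 * np.pi * (n + 1) * xs)
    p = np.polyval(np.polyfit(xs, f, n), xs)   # ~ L^2-best p in P_n
    return np.sqrt(np.sum((f - p) ** 2) * dx)  # ~ ||f - p||_2
```

For $n=0$ the fit is just the mean of $f$, which is essentially $0$, so the bound is close to $\left(\int_0^1\sin^2(2\pi x)\,dx\right)^{1/2}=1/\sqrt{2}$; the oscillating $f$ stays far from $P_n$ in sup norm.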


2 Answers

Best answer

First, a comment: your thinking is kind of backwards. Since $P_n$ is closed in $C[a,b]$, any continuous function $f$ that is not a polynomial of degree $\leq n$ is the example you seek, for appropriate $\epsilon$. If you want $\epsilon$ to be given ahead of time, that just means you have to scale up $f$ to fit your $\epsilon$. In other words, you just need to find an explicit lower bound for $\inf_{p\in P_n}\|p-f\|_\infty$ so you know how much you need to scale $f$ by to turn the lower bound into $\epsilon$.

One simple way to find such an explicit bound is using the $L^2$ norm. Given any $f\in L^2[a,b]$, you can compute the projection $g$ of $f$ onto the orthogonal complement of $P_n$ using Gram-Schmidt (and $g$ will be nonzero unless $f\in P_n$). We then know that $$\|p-f\|_2\geq\|g\|_2$$ for any $p\in P_n$. Converting this into the sup norm (using $\|h\|_2\leq(b-a)^{1/2}\|h\|_\infty$ for any $h\in C[a,b]$), we see that $$\|p-f\|_\infty\geq(b-a)^{-1/2}\|g\|_2$$ for all $p\in P_n$. By scaling $f$ we can make the constant on the right side of this inequality anything you want (since $g$ scales in proportion to $f$).

In particular, if you start with $f(x)=x^{n+1}$, this gives an algorithm to explicitly compute a positive lower bound for $\|f-p\|_\infty$ for all $p\in P_n$.
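A sketch of that algorithm for $f(x)=x^{n+1}$ on $[0,1]$ (the function name is mine; note the Gram matrix of the monomials is the notoriously ill-conditioned Hilbert matrix, so this is only reliable for small $n$):

```python
import numpy as np

def l2_lower_bound(n):
    """Lower bound on inf over p in P_n of ||p - f||_inf,
    for f(x) = x^(n+1) on [0, 1].

    The L^2[0,1] distance from f to P_n bounds the sup-norm distance
    below; here b - a = 1, so no extra constant is needed.
    """
    # Gram matrix of the monomials 1, x, ..., x^n:
    # H[i][j] = integral of x^(i+j) over [0,1] = 1/(i+j+1) (Hilbert matrix).
    H = np.array([[1.0 / (i + j + 1) for j in range(n + 1)]
                  for i in range(n + 1)])
    # Inner products of f with the monomials: b[i] = 1/(i+n+2).
    b = np.array([1.0 / (i + n + 2) for i in range(n + 1)])
    c = np.linalg.solve(H, b)               # coefficients of the L^2-best p
    resid_sq = 1.0 / (2 * n + 3) - c @ b    # ||f||_2^2 - ||proj f||_2^2
    return np.sqrt(max(resid_sq, 0.0))
```

For $n=0$ this returns $1/\sqrt{12}\approx 0.289$, the $L^2$ distance from $x$ to its best constant approximation $1/2$.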


A Broad Hint, but not a complete answer

Suggestions:

  1. Simplify to $a = 0, b = 1$, or $a = -1, b = 1$ to make the notation nicer.

  2. You might want to look at something like $f(x) = (x-a)^{n+1}$. Degree-$n$ polynomials won't be able to fit this very well. There will be some minimum value $h$ of $u(p) = \max_{x \in [a,b]} |f(x) - p(x)|$ over all $p \in P_n$. To get an error of at least $\epsilon$, multiply my $f$ by $\frac{\epsilon}{h}$.

Let's do that explicitly for $n = 0$. Pick $a = 0, b = 1$, so $$ f(x) = x. $$ Approximating with $p_c(x) = c$ in $P_0$ gives a max error $\max_{x\in[0,1]}|x-c|\geq 1/2$ for every choice of $c$, with equality at $c = \frac12$, so $h = \frac12$. To make the error at least $\epsilon$, multiply $f$ by $\frac{\epsilon}{h} = 2\epsilon$ to get $$ f_\epsilon(x) = 2\epsilon x. $$

Of course, the minimization is messier as $n$ grows, but I'll bet that if you do the cases $n = 1, 2$, you'll start to get a general answer.
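A quick numerical check of the $n = 0$ case above, together with $n = 1$ (where the equioscillating line $x - \frac18$ is the classical best uniform approximation to $x^2$ on $[0,1]$, with error $h = \frac18$):

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 100_001)   # fine grid on [0, 1]

# n = 0: best constant approximation to f(x) = x.
# The midrange value c = 1/2 minimizes the sup error.
h0 = np.abs(xs - 0.5).max()                 # minimax error h = 1/2

# n = 1: best line approximating f(x) = x^2 is p(x) = x - 1/8;
# the error x^2 - x + 1/8 equioscillates between +1/8 and -1/8
# at x = 0, 1/2, 1, which certifies optimality.
h1 = np.abs(xs ** 2 - (xs - 0.125)).max()   # minimax error h = 1/8
```

So for $n = 1$ the scaled example is $f_\epsilon(x) = 8\epsilon x^2$, whose uniform distance from $P_1$ is exactly $\epsilon$.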