Generalisation of Stone-Weierstrass Theorem / Fourier series for linearly independent functions.


In mathematical analysis, the Weierstrass approximation theorem states that every continuous function defined on a closed interval $[a, b]$ can be uniformly approximated as closely as desired by a polynomial function.
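As a quick numerical illustration of the Weierstrass theorem (a sketch only, not part of any proof): here we approximate the continuous but non-smooth function $f(x) = |x|$ on $[-1,1]$ by polynomials of growing degree, using a Chebyshev least-squares fit as the (hypothetical) approximation scheme, and watch the sup-norm error shrink.

```python
import numpy as np

# Sketch: uniformly approximate the continuous function f(x) = |x| on [-1, 1]
# by polynomials of increasing degree (Chebyshev least-squares fit).
f = np.abs
xs = np.linspace(-1.0, 1.0, 2001)      # dense grid to estimate the sup norm

def sup_error(degree):
    # Fit a degree-`degree` polynomial (Chebyshev basis) and measure max |f - p|.
    p = np.polynomial.chebyshev.Chebyshev.fit(xs, f(xs), degree)
    return float(np.max(np.abs(f(xs) - p(xs))))

print([sup_error(d) for d in (2, 8, 32, 128)])   # errors shrink toward 0
```

The function name `sup_error` and the choice of $|x|$ as test function are mine; any continuous function would do.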

In Fourier analysis, if $f$ satisfies the three Dirichlet conditions, that is:

(i) $f(x)$ must have a finite number of extrema (maxima and minima) in any given interval.

(ii) $f(x)$ must have a finite number of discontinuities in any given interval, and each discontinuity must be finite (no infinite jumps).

(iii) $f(x)$ must be bounded.

Then $f(x)$ has a Fourier series that converges to it.
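For a concrete numerical sketch of this (names and test function are my own choices): the $1$-periodic square wave satisfies all three Dirichlet conditions, and its Fourier partial sums converge to it at every point of continuity.

```python
import numpy as np

# Sketch: pointwise Fourier convergence for the 1-periodic square wave
# (+1 on [0, 1/2), -1 on [1/2, 1)), whose Fourier series is
#   f(x) ~ (4/pi) * sum over odd k of sin(2*pi*k*x) / k
def partial_sum(x, N):
    ks = np.arange(1, N + 1, 2)        # odd frequencies 1, 3, 5, ..., <= N
    return (4 / np.pi) * float(np.sum(np.sin(2 * np.pi * ks * x) / ks))

x = 0.25   # a point of continuity, where the square wave equals 1
print([partial_sum(x, N) for N in (11, 101, 10001)])   # approaches 1
```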

I am wondering, more generally, if something like or some version of the following general theorem is true:

If $(f_i)_{i\in \mathbb{N}}\ $ is a sequence of bounded, (continuous?) linearly independent functions, meaning that for each $n\in\mathbb{N}$ and all $c_1, c_2, \ldots, c_n \in \mathbb{R},$ the equation $\displaystyle\sum_{j=1}^{n} c_j f_j=0\ $ implies $\ c_i = 0\ \forall i\in \{1,2,\ldots,n\},$ and if $\ f\ $ satisfies the Dirichlet conditions (or, alternatively, is continuous), then $\ \exists\ (c_n)_{n\in\mathbb{N}}\in {\mathbb{R}}^{\mathbb{N}}\ $ such that $f \equiv \displaystyle\sum_{i=1}^{\infty} c_i f_i. $

Is there some well-known generalised theorem of this form? Or is it all nuanced, and depends on the specifics? Or am I being all weird and confused?



Yes, there is a theorem that says something like what you want. As has been mentioned in the comments, it's called the Stone-Weierstrass Theorem, and it's true in very high generality. One of its corollaries, though, is this:

Fix a family $\mathcal{F}$ of functions $[a,b] \to \mathbb{R}$ so that

  1. $\mathcal{F}$ is closed under $+$, scalar multiplication, and $\times$
  2. $\mathcal{F}$ contains the constant $1$ function
  3. For any two points $x \neq y \in [a,b]$ there's an $f \in \mathcal{F}$ with $f(x) \neq f(y)$

Then every continuous function on $[a,b]$ can be uniformly approximated by functions in $\mathcal{F}$.

Notice that condition $1$ is closely related to your "linear independence" condition. It says that $\mathcal{F}$ should be a vector subspace of all continuous functions, and should moreover be closed under multiplication (we can say this quickly by saying "$\mathcal{F}$ is a subalgebra of the algebra of continuous functions"). This is a very mild strengthening of your "linear independence" condition that takes more general products of functions into account. For instance, if we define $\mathcal{F}$ by fixing a basis, then this is your linear independence condition with the bonus condition that the product of two basis elements lands in the span of that basis.

Condition $2$ says that your basis had better include the constant $1$ function, which is a sort of obvious requirement if you think about it. After all, think about the polynomials generated by $\{x, x^2, x^3, x^4, \ldots \}$. These all have $0$ constant term, and so for every such polynomial we have $p(0) = 0$. So how could we possibly approximate a function with $p(0) \neq 0$? This also provides an explicit answer to the question you asked Robert Israel in the comments, since $\{x, x^2, x^3, \ldots\}$ is linearly independent (and closed under multiplication too!) but "its span isn't dense" (you can't approximate arbitrary functions by elements of its span). But notice the constant $1$ function is in the set of polynomials (as $x^0$) and in the set of trigonometric polynomials (as $\cos(0x)$).
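This "span isn't dense" claim can be checked numerically in a few lines (a sketch with my own variable names): every element of the span of $\{x, x^2, \ldots, x^{10}\}$ vanishes at $0$, so none can approximate the constant function $1$ on $[0,1]$ with sup-norm error below $1$.

```python
import numpy as np

# Sketch: every linear combination of x, x^2, ..., x^10 vanishes at 0,
# so it cannot approximate f(x) = 1 on [0, 1] to sup-error below 1.
xs = np.linspace(0.0, 1.0, 501)
rng = np.random.default_rng(0)
for _ in range(100):
    c = rng.normal(size=10)                          # random coefficients c_1..c_10
    p = sum(ck * xs ** (k + 1) for k, ck in enumerate(c))
    assert p[0] == 0.0                               # p(0) = 0 always
    assert np.max(np.abs(1.0 - p)) >= 1.0            # so the error at x = 0 is 1
```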

Condition $3$, finally, says that we can separate points. Imagine we took $\mathcal{F} = \{ \text{all polynomials $p$ so that } p(0) = p(1) \}$. It's worth checking that this is a subalgebra that contains the constant $1$ polynomial. So it satisfies conditions $1$ and $2$. Of course, in $\mathcal{F}$ we always have $p(0) = p(1)$! So how could we possibly approximate a function where $p(0) \neq p(1)$? We won't be able to! For an incredibly silly example, take $\mathcal{F} = \{ \text{just the constant functions} \}$. Again, this is a subalgebra containing $1$, but I think it's clear that you have no shot of approximating all continuous functions using these! (So again, we say that $\mathcal{F}$ is not dense in the space of all continuous functions).
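The failure of point separation can also be made quantitative with a small numerical sketch (the construction below is my own): any polynomial with $p(0) = p(1) = c$ satisfies $|0 - c| + |1 - c| \ge 1$, so its sup-norm distance from $f(x) = x$ is at least $1/2$.

```python
import numpy as np

# Sketch: force p(0) = p(1) and observe that p can never approximate
# f(x) = x on [0, 1] to sup-error below 1/2.
rng = np.random.default_rng(1)
for _ in range(100):
    q = np.polynomial.Polynomial(rng.normal(size=6))
    # Subtract a multiple of x so that p(0) = p(1); p is still a polynomial.
    p = q - (q(1.0) - q(0.0)) * np.polynomial.Polynomial([0.0, 1.0])
    assert abs(p(0.0) - p(1.0)) < 1e-9
    # With p(0) = p(1) = c, we have |0 - c| + |1 - c| >= 1, so the max is >= 1/2.
    assert max(abs(0.0 - p(0.0)), abs(1.0 - p(1.0))) >= 0.5 - 1e-9
```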

What's incredible is that these three (obviously necessary) conditions are enough! The Stone-Weierstrass theorem says that if you have any family of functions that's a subalgebra, containing $1$, and separates points, you can approximate any continuous function using functions from that family! And this is true for continuous functions defined on spaces much more general than intervals $[a,b]$, as well as for codomains much more general than $\mathbb{R}$!

So to rephrase this in terms of a basis (so that it looks as similar to your proposed theorem as possible), the following is true:

Fix a sequence $(f_n)$ of linearly independent, continuous functions $[a,b] \to \mathbb{R}$ so that

  1. any product $f_i \cdot f_j$ lands in the span of the $(f_n)$
  2. $f_0 = 1$
  3. for any two points $x \neq y \in [a,b]$ there's an $n$ with $f_n(x) \neq f_n(y)$

Then every continuous function $f : [a,b] \to \mathbb{R}$ can be uniformly approximated by finite sums of the form $\sum_{k=0}^N c_k f_k$ for real coefficients $c_k$.


I hope this helps ^_^


TOO LONG FOR A COMMENT - NOT AN ANSWER

Let me point out a subtlety that the formulation of your question somewhat obscures, but that is very important to Harmonic Analysis and to Approximation Theory. If $f$ satisfies the Dirichlet conditions you mentioned, then letting $$ S_N f(x):=\sum_{\lvert n \rvert\le N} \widehat{f}(n)e^{2\pi i n x}$$ it holds that $$\tag{1} S_N f(x)\to f(x), \quad \text{ for all }x\in \mathbb T.$$ On the other hand, the Weierstrass theorem tells us that, if $f$ is continuous, then there is some sequence of polynomials $P_N$ such that $$\tag{2} P_N(x)\to f(x) \quad \text{uniformly for }x\in\mathbb T.$$ The Stone--Weierstrass theorem mentioned in the other answer follows the same lines as (2).
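To make the "explicit function of $f$" point concrete, here is a numerical rendering of $S_N f$ (a sketch with my own function names): the coefficients $\widehat{f}(n)$ are computed by a Riemann sum on the circle, which is spectrally accurate for smooth periodic $f$, and then summed into the partial sum.

```python
import numpy as np

# Sketch: S_N f as an explicit recipe applied to f. Fourier coefficients
# via a Riemann sum on T = [0, 1), then the symmetric partial sum.
def S_N(f, N, x, M=4096):
    t = np.arange(M) / M                                   # uniform grid on T
    ns = np.arange(-N, N + 1)
    fhat = np.array([np.mean(f(t) * np.exp(-2j * np.pi * n * t)) for n in ns])
    return float(np.real(np.sum(fhat * np.exp(2j * np.pi * ns * x))))

f = lambda t: np.exp(np.cos(2 * np.pi * t))                # smooth, 1-periodic
x = 0.3
print(abs(S_N(f, 20, x) - f(x)))   # tiny: nothing about the approximant is unknown
```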

Note carefully: (1) contains a great deal more information than (2). In (1), the approximant is an explicit function of $f$, whereas in (2), the polynomials $P_N$ are not specified at all. The main goal of the field known as constructive approximation is precisely the improvement of implicit statements such as (2) into explicit statements such as (1). For the Weierstrass theorem, this was accomplished by S. Bernstein with his Bernstein polynomials, which are among the great successes of 20th-century approximation theory.
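Bernstein's construction is short enough to sketch directly (variable names are mine): the approximant $B_n f(x) = \sum_{k=0}^{n} f(k/n)\binom{n}{k}x^k(1-x)^{n-k}$ is, like (1), an explicit function of $f$.

```python
import numpy as np
from math import comb

# Sketch of Bernstein's explicit approximant on [0, 1]:
#   B_n f(x) = sum_{k=0}^{n} f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
def bernstein(f, n, xs):
    ks = np.arange(n + 1)
    # Row k holds the Bernstein basis polynomial b_{k,n} evaluated on the grid.
    basis = np.array([comb(n, k) * xs**k * (1 - xs)**(n - k) for k in ks])
    return f(ks / n) @ basis

f = lambda x: np.abs(x - 0.5)            # continuous but not differentiable
xs = np.linspace(0.0, 1.0, 401)
err = lambda n: float(np.max(np.abs(f(xs) - bernstein(f, n, xs))))
print(err(10), err(200))                 # the sup error shrinks as n grows
```

The convergence here is slow near the kink at $x = 1/2$, which matches the general fact that Bernstein polynomials trade speed for complete explicitness.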

Harmonic Analysis is also very much concerned with explicit statements such as (1). Again, one of the most celebrated results of the 20th century is Carleson's theorem, which states that (1) holds for almost every $x\in\mathbb T$ provided $f\in L^2$. On the other hand, (1) can fail if $f$ is only assumed to be continuous, which is a well-known fact that can be found in textbooks. Note that a "trigonometric Weierstrass theorem", i.e. a result analogous to (2) but with trigonometric polynomials, is true for arbitrary continuous $f$.

I hope that all this information sheds some light on the subtle, but important, difference between (1) and (2).