Approximating the basis of a specific function


We are given a continuous function $g: A \to B $, where $A, B$ are compact subsets of $\mathbb{R}$.

We define a function $f(x) := g(b_1x)+g(b_2x)+...+ g(b_mx)$, where $b_i < 1$ and $b_ix$ denotes the scalar product of $b_i$ and $x$. (In fact, the $b_i$ are much smaller than $1$.)

I want to prove that $f(x)$ can be approximated by some $k$ functions $g(b_{j_1}x),g(b_{j_2}x),...,g(b_{j_k}x)$ from the original collection, where $k$ is much smaller than $m$.

Here is an outline why I believe this is true:

By the Stone–Weierstrass theorem, $g$ can be approximated uniformly, as closely as we like, by polynomials:

$g(x) \approx a_0+ a_1x+a_2x^2+...+a_nx^n$

Then,

$g(b_1x) \approx a_0 + a_1(b_1)x+a_2(b_1)^2x^2+...+a_n(b_1)^nx^n$

$g(b_2x) \approx a_0 + a_1(b_2)x+a_2(b_2)^2x^2+...+a_n(b_2)^nx^n$

$\vdots$

$g(b_mx) \approx a_0+a_1(b_m)x+a_2(b_m)^2x^2+...+a_n(b_m)^nx^n$

Then,

$f(x) \approx a_0\sum_{i=1}^m 1 + a_1x\sum_{i=1}^mb_i + ... + a_nx^n\sum_{i=1}^m(b_i)^n$

If we represent $f(x)$ and the $g(b_ix)$ by their vectors of polynomial coefficients ($f$ on the left, one column per $g(b_ix)$ on the right):

$\begin{bmatrix} a_0\sum_{i=1}^m 1 \\ a_1\sum_{i=1}^mb_i \\ ... \\ a_n\sum_{i=1}^m(b_i)^n \end{bmatrix}$ $\begin{bmatrix} a_0 & a_0 & ...& a_0 \\ a_1b_1 & a_1b_2 & ... & a_1b_m\\ ... & ... & ... & ... \\ a_n(b_1)^n & a_n(b_2)^n &... & a_n(b_m)^n \end{bmatrix}$

I guess it is equivalent to consider:

$\begin{bmatrix} \sum_{i=1}^m 1 \\ \sum_{i=1}^mb_i \\ ... \\ \sum_{i=1}^m(b_i)^n \end{bmatrix}$ $\begin{bmatrix} 1 & 1 & ...& 1 \\ b_1 & b_2 & ... & b_m\\ ... & ... & ... & ... \\ (b_1)^n & (b_2)^n &... & (b_m)^n \end{bmatrix}$
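This intuition can be checked numerically: with $b_i \ll 1$, the singular values of the Vandermonde-type matrix above decay rapidly, so its numerical rank is far below $\min(n+1, m)$. A sketch (the sizes, the range of the $b_i$, and the rank tolerance are my choices, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 30
b = rng.uniform(0.01, 0.2, size=m)            # b_i much smaller than 1
V = np.vander(b, N=n + 1, increasing=True).T  # rows: powers 0..n, cols: the b_i

# Numerical rank: count singular values above a (chosen) relative tolerance.
s = np.linalg.svd(V, compute_uv=False)
tol = 1e-8 * s[0]
numerical_rank = int(np.sum(s > tol))
print(numerical_rank, min(m, n + 1))  # numerical rank is far below full rank
```

The rows $(b_1^j, \dots, b_m^j)$ shrink like $\max_i b_i^j$, which is exactly why only the first few rows contribute.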

Now, let me explain why I think we need far fewer vectors from the right-hand matrix to represent the sum on the left. Notice that $b_i < 1$, so $(b_i)^n$ goes to $0$ very fast as $n$ grows.

Then, for all exponents $n$ beyond some $k$, $(b_i)^n < \delta$, and we could say that there are effectively at most $k+1$ linearly independent vectors among the $g(b_ix)$:

$\begin{bmatrix} 1 & 1 & ...& 1 \\ b_1 & b_2 & ... & b_m\\ ... & ... & ... & ... \\ (b_1)^k & (b_2)^k &... & (b_m)^k\\ 0 & 0 & ...& 0 \\ ... & ... & ... & ... \\ 0 & 0 &... & 0 \end{bmatrix}$

Then we would only need about $k$ of the $g(b_ix)$ to span $\begin{bmatrix} a_0\sum_{i=1}^m 1 \\ a_1\sum_{i=1}^mb_i \\ ... \\ a_n\sum_{i=1}^m(b_i)^n \end{bmatrix}$
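As a quick numerical sanity check of this heuristic (a sketch only: the choice $g = \cos$, the ranges of the $b_i$, and the grid are mine, not from the problem), fit coefficients $c_j$ for $k$ of the $g(b_jx)$ by least squares on a grid and look at the resulting uniform error:

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.cos                                   # a concrete smooth g (my choice)
m, k = 100, 8
b = np.sort(rng.uniform(0.05, 0.5, size=m))  # b_i < 1
x = np.linspace(0.0, 1.0, 1000)              # grid on A = [0, 1]

f = sum(g(bi * x) for bi in b)               # f(x) = sum_i g(b_i x)
idx = np.linspace(0, m - 1, k).astype(int)   # keep only k spread-out b_j
G = np.stack([g(b[j] * x) for j in idx], axis=1)

# Least-squares coefficients c_j and the uniform error on the grid.
c, *_ = np.linalg.lstsq(G, f, rcond=None)
err = np.max(np.abs(f - G @ c))
print(err)  # tiny compared with max |f| ~ m
```

Here $k = 8$ functions reproduce a sum of $m = 100$ to high accuracy, consistent with the rank argument above.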

How can I use this reasoning to prove that we can choose such $g(b_{j_i}x)$ so that $|f(x) - \sum_{i=1}^kc_ig(b_{j_i}x)| < \epsilon$ for all $x \in A$?

I don't know exactly what $k$ is, but with $b := \max_i b_i$ I would guess something like:

$mb^k < \epsilon, \qquad k > \frac{\log m - \log \epsilon}{\log(1/b)}$

(dividing by $\log b < 0$ flips the inequality, hence the $\log(1/b)$ in the denominator).
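With concrete (made-up) numbers, this guessed bound can be evaluated directly:

```python
import math

# Heuristic bound: with b = max_i b_i < 1, requiring m * b**k < eps
# gives k > (log m - log eps) / log(1/b).  The values below are examples.
m, b, eps = 1000, 0.2, 1e-6
k = math.ceil((math.log(m) - math.log(eps)) / math.log(1 / b))
print(k, m * b**k < eps)  # the chosen k does satisfy m * b**k < eps
```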

I am not sure my conjecture is true, so I would be very grateful if you could point out any mistakes in my reasoning.

On BEST ANSWER

The uniform norm makes approximability even (much) worse than in the vector-projection case. Indeed, let $g_0:\Bbb R\to\Bbb R$ be the tent function with $g_0(x)=4x-2$ if $1/2\le x\le 3/4$, $g_0(x)= 4-4x$ if $3/4\le x\le 1$, and $g_0(x)=0$ otherwise. Let $A=[0,1]$, $g(x)=g_0(2^mx)$ for each $x\in A$, and $b_i=2^{-i}$ for each $1\le i\le m$. Recall that the support $\operatorname{supp} h$ of a function $h$ is the set of $x$ such that $h(x)\ne 0$. The functions $g(b_ix)$ have mutually disjoint supports $(2^{i-m-1},2^{i-m})$, so if some $b_l$ is missing from $b_{j_1},\dots,b_{j_k}$, then at $x=\tfrac 34\,2^{l-m}$ we have $f(x)=1$, whereas $\sum_{i=1}^kc_ig(b_{j_i}x)=0$.
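A quick numerical check of this construction (a sketch; the particular $m$ and the dropped index $l$ are my choices):

```python
import numpy as np

# Tent function supported on [1/2, 1]: rises on [1/2, 3/4], falls on [3/4, 1].
def g0(t):
    t = np.asarray(t, dtype=float)
    return np.where((0.5 <= t) & (t <= 0.75), 4 * t - 2,
                    np.where((0.75 < t) & (t <= 1.0), 4 - 4 * t, 0.0))

m = 6
g = lambda x: g0(2.0**m * x)            # g(x) = g_0(2^m x)
b = [2.0**(-i) for i in range(1, m + 1)]  # b_i = 2^{-i}

l = 3                                   # suppose b_l is missing from the subset
x_star = 0.75 * 2.0**(l - m)            # peak of g(b_l x): 2^{m-l} x = 3/4
f_at_x = sum(g(bi * x_star) for bi in b)
others = [g(b[i - 1] * x_star) for i in range(1, m + 1) if i != l]
print(f_at_x, max(abs(v) for v in others))  # f = 1 there, all other terms are 0
```

Since the supports are pairwise disjoint, every retained $g(b_{j_i}x)$ vanishes at $x^\*$, so no choice of coefficients can reduce the uniform error below $1$.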