Proof that you can approximate any continuous function using rectangles/step functions within a small error


Can any continuous function be approximated by a step function (a combination of rectangles) to within a small error $\epsilon$, where $\epsilon$ bounds the difference between the approximating step function and the continuous function?

Does such a proof exist, or a combination of proofs? I'm trying to write a mathematical proof that neural networks can approximate any continuous function within a small epsilon: they can produce a Riemann-sum-like step function with arbitrarily small step size, and therefore can approximate a continuous function.



BEST ANSWER

Let $f$ be a continuous function defined on $[0,1]$. Since $[0,1]$ is compact, $f$ is uniformly continuous. Let $\varepsilon>0$ be given. Then there exists $\delta>0$ such that $|f(x)-f(y)|<\varepsilon$ whenever $|x-y|<\delta$. Now subdivide the interval $[0,1]$ into $n$ subintervals of equal length, with $n$ sufficiently large that $\frac1n<\delta$, and take $g$ to be the piecewise constant function defined by $$ g(x)=f(\lfloor nx\rfloor/n). $$ I claim that $$ \sup_{x\in[0,1]}|f(x)-g(x)|\le\varepsilon. $$ Indeed, for $x\in[\frac{k}{n},\frac{k+1}{n})$ with $k=0,1,\ldots,n-1$, we have $\lfloor nx\rfloor/n=k/n$, and so $$ |f(x)-g(x)|=|f(x)-f(\lfloor nx\rfloor/n)|=|f(x)-f(k/n)|<\varepsilon, $$ since $|x-\frac{k}{n}|<\frac1n<\delta$. At the right endpoint, $g(1)=f(1)$, so the bound holds on all of $[0,1]$.
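The construction above is easy to check numerically. Below is a minimal sketch (not from the answer; the helper name `make_step_approx` is mine) that builds $g(x)=f(\lfloor nx\rfloor/n)$ for a sample function $f(x)=\sin(\pi x)$ and estimates the sup-norm error on a fine grid. Since $|f'|\le\pi$ here, the error should stay below $\pi/n$:

```python
import math

def make_step_approx(f, n):
    """Return the piecewise-constant g(x) = f(floor(n*x)/n) on [0, 1]."""
    def g(x):
        return f(math.floor(n * x) / n)  # constant on each [k/n, (k+1)/n)
    return g

# Sample continuous function on [0, 1] and its step approximation.
f = lambda x: math.sin(math.pi * x)
n = 1000
g = make_step_approx(f, n)

# Estimate sup |f - g| on a fine grid; since |f'| <= pi, uniform
# continuity gives |f(x) - g(x)| <= pi * (1/n), i.e. about 0.00314.
err = max(abs(f(j / 10000) - g(j / 10000)) for j in range(10001))
print(err)
```

Increasing `n` shrinks the error roughly in proportion to $1/n$, mirroring the role of $\delta$ in the proof.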