Suppose we're given the function $f(x)=e^{-x^2}$ and we want to construct a piecewise linear interpolant of it on the interval $[0,10]$ under the following constraints:
- We divide the interval $[0,10]$ into two subintervals $[0,a]$ and $[a,10]$ and partition them equidistantly into $n_1$ and $n_2$ subintervals, respectively.
- We're looking for a given (global) accuracy of $\varepsilon=10^{-2}.$
- $n_1+n_2\le 12.$
My thoughts:
I took a look at the function $f(x)=e^{-x^2}$ restricted to $[0,10].$ I computed the first three derivatives $$\begin{aligned}f'(x)&=-2xe^{-x^2}\\f''(x)&=(4x^2-2)e^{-x^2}\\f'''(x)&=-4x(2x^2-3)e^{-x^2}\end{aligned}$$ $f$ is decreasing on $[0,10]$ and has an inflection point at $x=\frac1{\sqrt 2}.$ From the lectures I know that if we interpolate a function $f\in C^2([a,b])$ with a linear spline $\varphi,$ we have the estimate $$|f(x)-\varphi(x)|\le\frac18h^2M_2,$$ where $h$ is the maximum distance between adjacent nodes and $M_2:=\max\limits_{[a,b]}|f''(x)|.$ On $[0,10]$ the only critical points of $f''$ are $x=0$ and $x=\sqrt{3/2},$ and since $\left|f''(\sqrt{3/2})\right|=\frac4{e^{3/2}}<2=|f''(0)|,$ we get $M_2=2.$ So in our case $$|f(x)-\varphi(x)|\le\frac18\max\left\{\frac{a}{n_1},\frac{10-a}{n_2}\right\}^2\underbrace{|f''(0)|}_{2}\le\varepsilon=10^{-2}.$$
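To sanity-check this bound numerically, I wrote a short Python sketch (my own construction, not from the lectures) that evaluates $\frac18h^2M_2$ separately on $[0,a]$ and $[a,10],$ approximating each piece's $M_2$ by dense sampling of $|f''|$:

```python
import math

def fpp(x):
    """Second derivative of f(x) = exp(-x^2)."""
    return (4 * x**2 - 2) * math.exp(-x**2)

def max_abs_fpp(lo, hi, samples=10_000):
    """Approximate M_2 = max |f''| on [lo, hi] by dense sampling."""
    return max(abs(fpp(lo + (hi - lo) * i / samples)) for i in range(samples + 1))

def spline_error_bound(a, n1, n2):
    """Worst-case bound (1/8) h^2 M_2, applied separately on [0,a] and [a,10]."""
    b1 = (1 / 8) * (a / n1) ** 2 * max_abs_fpp(0, a)
    b2 = (1 / 8) * ((10 - a) / n2) ** 2 * max_abs_fpp(a, 10)
    return max(b1, b2)
```

For instance, `spline_error_bound(2, 10, 2)` meets $10^{-2}$ on $[0,2]$ but is roughly $0.5$ on $[2,10],$ so the tail dominates for that choice.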
I'm not sure, though, what $a$ should be. On $\Bbb R,$ $f$ has the horizontal asymptote $y=0$ (since $\lim\limits_{x\to\infty}f(x)=0$ and $\lim\limits_{x\to\infty}f'(x)=0$), and $|f''|$ decays to $0$ beyond $x=\sqrt{3/2}.$ So if we choose some $a\ge\frac1{\sqrt 2},$ maybe we would need fewer nodes to interpolate $f|_{[a,10]}$ accurately, as the graph flattens out toward the $x$-axis there, but I don't quite know how to make that argument precise.
Any advice or guidance would be appreciated.


Look at the graph of $f''$ over $[0,10]$. Choose short intervals (large values of $n_1$ and $n_2$) in regions where $|f''|$ is large, and longer ones where it's small. The idea is to make the product $h^2M_2$ roughly constant.
Other than the fuzzy advice I gave above, I don’t know how to approach this mathematically. But it should be easy to investigate experimentally using a program like Mathematica.
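In Python, for example, the actual interpolation error for a given choice of $a$, $n_1$, $n_2$ can be estimated by dense sampling (the node layout and sampling resolution below are my own assumptions):

```python
import math

def f(x):
    return math.exp(-x**2)

def max_interp_error(a, n1, n2, samples=20_000):
    """Estimate max |f - phi| on [0,10], where phi is the linear spline on
    equidistant nodes: n1 pieces on [0,a], n2 pieces on [a,10]."""
    nodes = [a * i / n1 for i in range(n1)] \
          + [a + (10 - a) * j / n2 for j in range(n2 + 1)]
    ys = [f(x) for x in nodes]
    worst, k = 0.0, 0
    for i in range(samples + 1):
        x = 10 * i / samples
        # Advance to the spline segment containing x.
        while k < len(nodes) - 2 and x > nodes[k + 1]:
            k += 1
        t = (x - nodes[k]) / (nodes[k + 1] - nodes[k])
        phi = (1 - t) * ys[k] + t * ys[k + 1]
        worst = max(worst, abs(f(x) - phi))
    return worst
```

Doubling the node counts should shrink the measured error by roughly a factor of four, consistent with the $h^2$ in the bound.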
If you really want to be rigorous, you can set up an optimization problem with $a,$ $n_1,$ and $n_2$ as the independent variables and the approximation error as the objective function to be minimized. Then just throw the problem at your favorite optimization procedure.
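A crude version of that idea, using the worst-case bound $\frac18h^2M_2$ as a proxy objective and a brute-force grid instead of a real optimizer (grid step and sample count are arbitrary choices of mine), might look like:

```python
import math

def bound(a, n1, n2, samples=400):
    """Worst-case estimate (1/8) h^2 M_2 per piece, M_2 via sampling |f''|."""
    fpp = lambda x: abs((4 * x * x - 2) * math.exp(-x * x))
    m1 = max(fpp(a * i / samples) for i in range(samples + 1))
    m2 = max(fpp(a + (10 - a) * i / samples) for i in range(samples + 1))
    return max((a / n1) ** 2 * m1, ((10 - a) / n2) ** 2 * m2) / 8

# Brute-force grid search over a (step 0.1) and every split with n1 + n2 = 12.
best = min(
    (bound(a / 10, n1, 12 - n1), a / 10, n1)
    for a in range(5, 96)   # a ranges over 0.5, 0.6, ..., 9.5
    for n1 in range(1, 12)
)
print(best)  # (smallest bound found, corresponding a, corresponding n1)
```

Note this minimizes the theoretical bound, which is pessimistic; the true error (measured experimentally) can be noticeably smaller for the same node layout.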