Least Squares Alternatives - Approximating Functions


I was given this least squares problem to solve:

Find a linear function $\ell(x)$ such that $\displaystyle\int_0^1(e^x-\ell(x))^2{\rm d}x$ is minimized.

As an answer I got $\ell(x)=0.5876+0.5519x$, which I am fairly sure, but not certain, is right. I am also supposed to find the approximating function $\ell(x)$ in two other ways, which is where I need help.

a) Find $\ell(x)$ such that $\int_0^1|e^x-\ell(x)|{\rm d}x$ is minimized. (How do I even integrate an absolute value?)

b) Find $\ell(x)$ such that $\displaystyle\max_{0\leq x \leq 1}|e^x-\ell(x)|$ is minimized.



Best answer:

About the main problem: since $$ \int_{0}^{1} e^{x}\,dx = e-1, \qquad \int_{0}^{1} e^{x}\,P_1(2x-1)\,dx = 3-e, $$ where $P_1(2x-1)=2x-1$ is the shifted Legendre polynomial with $\int_0^1 P_1(2x-1)^2\,dx=\frac{1}{3}$, the best $L^2$ approximation is $$ \ell(x) = (e-1)+3(3-e)(2x-1) = \color{red}{(4e-10)+(18-6e)x}, $$ as already pointed out by Winther in the comments. About $(a)$ and $(b)$: given the convexity of $e^x$, it is quite trivial that the best $L^1$ and $L^\infty$ approximations are given by two lines through $(x_1,e^{x_1})$ and $(x_2,e^{x_2})$ with $0<x_1<x_2<1$.
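For a quick numerical cross-check (my own sketch, not part of the original answer), the normal equations for $\ell(x)=a+bx$ can be solved directly; they reproduce $(4e-10)+(18-6e)x$:

```python
import math

e = math.e
# Normal equations for minimizing ∫₀¹ (e^x - a - b·x)² dx over a and b:
#   a + b/2   = ∫₀¹ e^x dx    = e - 1
#   a/2 + b/3 = ∫₀¹ x·e^x dx  = 1
b = 12 * (1 - (e - 1) / 2)  # eliminating a gives b/12 = 1 - (e-1)/2
a = (e - 1) - b / 2

print(a, b)  # a = 4e - 10 ≈ 0.8731, b = 18 - 6e ≈ 1.6903
```

In particular the asker's $0.5876+0.5519x$ cannot be the $L^2$ minimizer.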

$L^1$ case: let $g(x) = \frac{1}{x_1-x_2}\left(e^{x_2}(x_1-x)-e^{x_1}(x_2-x)\right)$ be our candidate best approximation.

$$ \int_{0}^{1}\left| e^{x}-g(x)\right|\,dx = \int_{0}^{1}(e^x-g(x))\,dx +2\int_{x_1}^{x_2}(g(x)-e^{x})\,dx $$ gives us a (horrible) function of $x_1,x_2$ to minimize.
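In fact the two-variable minimization can be bypassed (the following shortcut is mine, not part of the original answer, and assumes the optimal line crosses $e^x$ exactly at $x_1,x_2$). Perturbing the optimal $g$ by $t(\alpha+\beta x)$ and requiring the derivative of the $L^1$ error to vanish at $t=0$ gives, for all $\alpha,\beta$, $$\int_0^1 \operatorname{sign}\left(e^x-g(x)\right)(\alpha+\beta x)\,dx=0.$$ Since the sign pattern is $+,-,+$ on $(0,x_1)$, $(x_1,x_2)$, $(x_2,1)$, taking $(\alpha,\beta)=(1,0)$ and $(0,1)$ yields $$ x_1-(x_2-x_1)+(1-x_2)=0\ \Rightarrow\ x_2-x_1=\tfrac{1}{2}, \qquad \frac{x_1^2}{2}-\frac{x_2^2-x_1^2}{2}+\frac{1-x_2^2}{2}=0\ \Rightarrow\ x_2^2-x_1^2=\tfrac{1}{2}, $$ and dividing the second relation by the first gives $x_1+x_2=1$. Hence $x_1=\tfrac14$, $x_2=\tfrac34$: the best $L^1$ line simply interpolates $e^x$ at the quartile points.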

$L^{\infty}$ case: let $g(x) = \frac{1}{x_1-x_2}\left(e^{x_2}(x_1-x)-e^{x_1}(x_2-x)\right)$ be our candidate best approximation.

With $f(x)=e^x$, the difference $f(x)-g(x)$ is a convex function with a stationary point (a minimum) at $\log\left(\frac{e^{x_2}-e^{x_1}}{x_2-x_1}\right)$. Equioscillation requires the endpoint errors to equal minus the interior error, so we have to solve: $$ f(0)-g(0)=g\left(\log\left(\frac{e^{x_2}-e^{x_1}}{x_2-x_1}\right)\right)-\frac{e^{x_2}-e^{x_1}}{x_2-x_1}=f(1)-g(1). $$

By letting $g(x)=a+bx$ and imposing $f(0)-g(0)=f(1)-g(1)$ we get $b=e-1$. So, if the slope of $\ell$ is $e-1$, the stationary point is located at $\log(e-1)$ and $a$ can be found by solving: $$ 1-a = a+(e-1)\log(e-1)-(e-1) $$ and finding: $$\color{red}{ g(x) = \frac{e+(1-e)\log(e-1)}{2}+(e-1)x}.$$
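As a sanity check (a small numerical sketch, not in the original answer), this $g$ indeed equioscillates: the error $e^x-g(x)$ equals $+E$ at both endpoints and $-E$ at $x=\log(e-1)$, with $E\approx 0.1059$:

```python
import math

e = math.e
b = e - 1                                # slope of the minimax line
a = (e + (1 - e) * math.log(e - 1)) / 2  # intercept found above

def err(x):
    return math.exp(x) - (a + b * x)

E0, E1 = err(0.0), err(1.0)
Emid = err(math.log(e - 1))  # interior minimum of the error

print(E0, E1, Emid)  # ≈ 0.1059, 0.1059, -0.1059
```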

By trial and error we may check that the best $L^2$ approximation is not bad at all as an $L^\infty$ approximation (as predicted by Chebyshev's theorems), and that the error of the best $L^\infty$ approximation is around $\frac{1}{10}$. Probably the exercise is intended as a lesson on how practical the least squares method is, and how beautiful $L^2$ is, compared to $L^1$ and $L^{\infty}$.
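The "trial and error" comparison is easy to script (my own sketch; a uniform grid on $[0,1]$ stands in for the true sup):

```python
import math

e = math.e

def l2(x):    # best L² line from the top of the answer
    return (4 * e - 10) + (18 - 6 * e) * x

def linf(x):  # best L∞ (minimax) line derived above
    return (e + (1 - e) * math.log(e - 1)) / 2 + (e - 1) * x

xs = [i / 10000 for i in range(10001)]
sup_l2 = max(abs(math.exp(x) - l2(x)) for x in xs)
sup_linf = max(abs(math.exp(x) - linf(x)) for x in xs)

print(sup_l2, sup_linf)  # ≈ 0.1548 vs ≈ 0.1059
```

So the $L^2$ line loses only about fifty percent in sup-norm against the true minimax line.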

Another answer:

(a.) I would assume that $l(x)$ intersects $y = e^x$ at two points $0 < u < v < 1$, in which case $$ l(x) = \dfrac{e^v - e^u}{v - u}(x - u) + e^u = mx + b, \qquad m = \dfrac{e^v - e^u}{v - u}, \quad b = \dfrac{v e^u - u e^v}{v - u}. $$

Then I would try to minimize \begin{align} \int_0^1|e^x-l(x)|dx &= \int_0^u(e^x-l(x))dx - \int_u^v(e^x-l(x))dx + \int_v^1(e^x-l(x))dx\\ \end{align}
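A crude numerical sketch of that minimization (the exact piecewise integrals and the grid search are my own addition) suggests the crossings land at the quartile points:

```python
import math

def l1_error(u, v):
    # secant line l(x) = m·x + c through (u, e^u) and (v, e^v)
    m = (math.exp(v) - math.exp(u)) / (v - u)
    c = math.exp(u) - m * u
    # ∫₀¹ |e^x - l(x)| dx, split at the crossings u and v (each piece exact):
    total = (math.e - 1) - (c + m / 2)  # signed ∫₀¹ (e^x - l) dx
    mid = c * (v - u) + m * (v * v - u * u) / 2 - (math.exp(v) - math.exp(u))
    return total + 2 * mid              # |·| flips sign only on (u, v)

# brute-force search on a 0.01 grid over 0 < u < v < 1
grid = [i / 100 for i in range(1, 100)]
best = min((l1_error(u, v), u, v) for u in grid for v in grid if u < v)
print(best)  # ≈ (0.0523, 0.25, 0.75): crossings at u = 1/4, v = 3/4
```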

(b.) Our error function is $E(x;m,b) = e^x - mx - b$. It isn't hard to show that

$\min_{x \in [0,1]} E(x;m,b) = E(\ln m;m,b) = m - m \ln m - b$

We must then arrange things so that

$\max_{x \in [0,1]} E(x;m,b) = E(0;m,b) = E(1;m,b)$

So, we need to solve $1 - b = e - m - b = b + m \ln m - m\;$ for $\;m$ and $\;b$, where the last expression is $-\min_{x \in [0,1]} E(x;m,b)$: the interior dip must be as deep as the endpoint errors are high.

I got

$m = e - 1 \approx 1.72$

$\; b = \dfrac 12(e - (e - 1) \ln(e-1)) \approx 0.894$