I was given this least squares problem to solve:
Find a linear function $\ell(x)$ such that $\displaystyle\int_0^1(e^x-\ell(x))^2{\rm d}x$ is minimized.
As an answer, I got $\ell(x)=0.5876+0.5519x$, which I am fairly sure, but not certain, is right. I am also supposed to find the approximating function $\ell(x)$ in two other ways, which is where I need help.
a) Find $\ell(x)$ such that $\int_0^1|e^x-\ell(x)|{\rm d}x$ is minimized. (How do I even integrate an absolute value?)
b) Find $\ell(x)$ such that $\displaystyle\max_{0\leq x \leq 1}|e^x-\ell(x)|$ is minimized.
About the main problem: since $$ \int_{0}^{1} e^{x}\,dx = e-1, \qquad \int_{0}^{1} e^{x} P_1(2x-1)\,dx = 3-e, \qquad \int_{0}^{1} P_1(2x-1)^2\,dx = \frac{1}{3}, $$ where $P_1(2x-1)=2x-1$ is the shifted Legendre polynomial of degree one, the best $L^2$ approximation is the orthogonal projection $$(e-1)+3(3-e)(2x-1)=\color{red}{(4e-10)+(18-6e)x},$$ as already pointed out by Winther in the comments. About $(a)$ and $(b)$: given the convexity of $e^x$, it is quite easy to see that the best $L^1$ and $L^\infty$ approximations are both lines through two points $(x_1,e^{x_1})$ and $(x_2,e^{x_2})$ with $0<x_1<x_2<1$.
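As a sanity check on those coefficients, one can solve the normal equations for $\min_{a,b}\int_0^1(e^x-a-bx)^2\,dx$ directly, using the exact moments $\int_0^1 e^x\,dx=e-1$ and $\int_0^1 xe^x\,dx=1$ (a quick Python sketch, not part of the original argument):

```python
from math import e

# Normal equations for min over (a, b) of ∫₀¹ (e^x - a - bx)² dx:
#   a·1     + b·(1/2) = ∫₀¹ e^x dx   = e - 1
#   a·(1/2) + b·(1/3) = ∫₀¹ x e^x dx = 1
m00, m01, r0 = 1.0, 0.5, e - 1
m10, m11, r1 = 0.5, 1 / 3, 1.0

# Solve the 2x2 system by Cramer's rule.
det = m00 * m11 - m01 * m10
a = (r0 * m11 - m01 * r1) / det
b = (m00 * r1 - r0 * m10) / det

print(a, b)  # should match 4e - 10 and 18 - 6e
```

This confirms $a=4e-10\approx 0.8731$ and $b=18-6e\approx 1.6903$, so the coefficients quoted in the question are not the $L^2$ minimizer.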
$L^1$ case: let $g(x) = \frac{1}{x_1-x_2}\left(e^{x_2}(x_1-x)-e^{x_1}(x_2-x)\right)$ be our candidate best approximation, i.e. the secant line through $(x_1,e^{x_1})$ and $(x_2,e^{x_2})$.
Since the secant line lies above the convex function $e^x$ on $(x_1,x_2)$ and below it outside, $$ \int_{0}^{1}\left| e^{x}-g(x)\right|\,dx = \int_{0}^{1}(e^x-g(x))\,dx +2\int_{x_1}^{x_2}(g(x)-e^{x})\,dx, $$ which gives us a (horrible) function of $x_1,x_2$ to minimize.
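Minimizing that function numerically suggests the optimal interpolation points are $x_1=\frac14$ and $x_2=\frac34$, consistent with the sign conditions $\int_0^1\operatorname{sign}(e^x-g)\,dx=\int_0^1 x\operatorname{sign}(e^x-g)\,dx=0$ at the optimum. A pure-Python sketch (grid search, exact antiderivatives, no quadrature library):

```python
from math import exp, e

def l1_error(x1, x2):
    """L^1 error of the secant line g(x) = a + b x through
    (x1, e^{x1}) and (x2, e^{x2}), via the sign split
    ∫₀¹ (e^x - g) dx + 2 ∫_{x1}^{x2} (g - e^x) dx."""
    b = (exp(x2) - exp(x1)) / (x2 - x1)      # slope of the secant
    a = exp(x1) - b * x1                     # intercept
    G = lambda x: a * x + b * x * x / 2      # antiderivative of g
    return (e - 1) - (G(1) - G(0)) + 2 * ((G(x2) - G(x1)) - (exp(x2) - exp(x1)))

# Brute-force grid search over 0 < x1 < x2 < 1.
n = 200
best = min(((l1_error(i / n, j / n), i / n, j / n)
            for i in range(1, n) for j in range(i + 1, n)),
           key=lambda t: t[0])
print(best)  # minimum attained near (x1, x2) = (0.25, 0.75)
```

The resulting best $L^1$ line is then simply the secant of $e^x$ through $x=\frac14$ and $x=\frac34$.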
$L^{\infty}$ case: let $g(x) = \frac{1}{x_1-x_2}\left(e^{x_2}(x_1-x)-e^{x_1}(x_2-x)\right)$ be our candidate best approximation, the same secant line as before.
With $f(x)=e^x$, the error $f(x)-g(x)$ is a convex function with a stationary point at $\log\left(\frac{e^{x_2}-e^{x_1}}{x_2-x_1}\right)$, where $e^x$ equals the slope of the secant. Equioscillation requires the (positive) endpoint errors to equal minus the (negative) error at the stationary point, so we have to solve: $$ f(0)-g(0)=g\left(\log\frac{e^{x_2}-e^{x_1}}{x_2-x_1}\right)-\frac{e^{x_2}-e^{x_1}}{x_2-x_1}=f(1)-g(1). $$
By letting $g(x)=a+bx$ and imposing $f(0)-g(0)=f(1)-g(1)$ we get $b=e-1$. So, if the slope of $\ell$ is $e-1$, the stationary point is located at $\log(e-1)$ and $a$ can be found by solving $$ 1-a = a+(e-1)\log(e-1)-(e-1), $$ which gives: $$\color{red}{ g(x) = \frac{e+(1-e)\log(e-1)}{2}+(e-1)x}.$$
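A quick numerical check of the equioscillation (pure-Python sketch): the error $e^x-g(x)$ should attain $+E$ at $x=0$ and $x=1$ and $-E$ at $x=\log(e-1)$, with $E=1-a\approx 0.1059$.

```python
from math import exp, log, e

b = e - 1
a = (e + (1 - e) * log(e - 1)) / 2
err = lambda x: exp(x) - (a + b * x)         # approximation error e^x - g(x)

E = 1 - a                                    # error at the left endpoint
xs = [i / 1000 for i in range(1001)]
sup = max(abs(err(x)) for x in xs)           # sup-norm error on a fine grid

print(E, sup)                                # both ≈ 0.1059
print(err(0), err(1), err(log(e - 1)))       # ≈ +E, +E, -E: equioscillation
```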
By trial and error we may check that the best $L^2$ approximation is not bad at all as an $L^\infty$ approximation (as predicted by Chebyshev's theorems), and that the error of the best $L^\infty$ approximation is around $\frac{1}{10}$. Probably the exercise is intended as a lesson on how practical the least-squares method is, and how beautiful $L^2$ is compared to $L^1$ and $L^{\infty}$.
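For instance, comparing sup-norm errors on a grid (a pure-Python sketch): the $L^2$ line has maximum error $3e-8\approx 0.155$, within $50\%$ of the optimal $\approx 0.106$.

```python
from math import exp, log, e

xs = [i / 1000 for i in range(1001)]
sup = lambda f: max(abs(f(x)) for x in xs)   # grid approximation of the sup norm

l2_line   = lambda x: (4 * e - 10) + (18 - 6 * e) * x                # best L² line
linf_line = lambda x: (e + (1 - e) * log(e - 1)) / 2 + (e - 1) * x   # best L∞ line

e_l2   = sup(lambda x: exp(x) - l2_line(x))
e_linf = sup(lambda x: exp(x) - linf_line(x))
print(e_l2, e_linf)  # ≈ 0.155 and ≈ 0.106
```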