So, I'm trying to self-learn analysis, and I don't have any solutions, so I hope you don't mind if I post my answer here for you to check; it seems I haven't solved it correctly.
Basically, what we have to prove is that $s_n=1+\frac{1}{1!}+\frac{1}{2!}+\ldots+\frac{1}{n!}+\frac{1}{n!n}$ is a "much" better approximation to $e$ than the partial sum $1+\frac{1}{1!}+\ldots+\frac{1}{n!}$.
Now, here's my solution:
$$0<s_n-e=\frac{1}{n!n}-\frac{1}{(n+1)!}-\frac{1}{(n+2)!}-\ldots=\frac{1}{n!}\left(\frac{1}{n}-\frac{1}{n+1}-\frac{1}{(n+1)(n+2)}-\ldots\right)<\frac{1}{n!}\cdot\frac{1}{n(n+1)}=\frac{1}{(n+1)!\,n}$$
(The bracket is positive, since $\frac{1}{n+1}+\frac{1}{(n+1)(n+2)}+\ldots<\sum_{k=1}^\infty\frac{1}{(n+1)^k}=\frac{1}{n}$; and since $\frac{1}{n}-\frac{1}{n+1}=\frac{1}{n(n+1)}$, dropping the remaining subtracted terms gives the upper bound.)
The error bound $\frac{1}{(n+1)!\,n}$ is less than $\frac{1}{n!n}$, and thus we get a better approximation.
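As a sanity check, the bound can be verified numerically; here is a quick sketch using Python's exact rational arithmetic (the helper names `s` and `E` are my own, not from the book):

```python
from fractions import Fraction
from math import factorial

def s(n):
    """s_n = 1 + 1/1! + ... + 1/n! + 1/(n! * n), computed exactly."""
    return sum(Fraction(1, factorial(k)) for k in range(n + 1)) + Fraction(1, factorial(n) * n)

# Rational stand-in for e: the Taylor partial sum up to 1/50!,
# far more accurate than any bound tested below.
E = sum(Fraction(1, factorial(k)) for k in range(51))

for n in range(2, 15):
    err = s(n) - E
    # 0 < s_n - e < 1/((n+1)! * n)
    assert 0 < err < Fraction(1, factorial(n + 1) * n), n
```

Running this raises no assertion, so the bound holds at least for $2\le n\le 14$.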
The problem is that the approximation barely seems any better, not "much better" as the book says. Did I do something wrong? Also, in case it's relevant, here is what we proved in the previous two parts:
a) $1+\frac{1}{1!}+\ldots+\frac{1}{n!}+\frac{1}{n!n}=3-\frac{1}{1\cdot 2\cdot 2!}-\ldots-\frac{1}{(n-1)\cdot n\cdot n!}$
b) $e=3-\sum_{k=0}^\infty \frac{1}{(k+1)(k+2)(k+2)!}$
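Identity a) can also be checked exactly with rationals (again a sketch; `lhs` and `rhs` are hypothetical helper names of mine):

```python
from fractions import Fraction
from math import factorial

def lhs(n):
    # 1 + 1/1! + ... + 1/n! + 1/(n! * n)
    return sum(Fraction(1, factorial(k)) for k in range(n + 1)) + Fraction(1, factorial(n) * n)

def rhs(n):
    # 3 - 1/(1*2*2!) - ... - 1/((n-1)*n*n!)
    return 3 - sum(Fraction(1, (k - 1) * k * factorial(k)) for k in range(2, n + 1))

# the two sides agree exactly for every n tested
assert all(lhs(n) == rhs(n) for n in range(1, 20))
```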
Thanks in advance.
It is a much better approximation than the Taylor approximation $t_n=1+\dfrac1{1!}+\dotsm+\dfrac 1{n!}$ in a precise sense: the error $\,s_n-\mathrm e$ becomes negligible compared with the error $\,\mathrm e-t_n$, i.e. $$\lim_{n\to\infty}\frac{s_n-\mathrm e}{\mathrm e-t_n}=0. $$
Indeed, as you noticed, $\,0 <s_n-\mathrm e<\dfrac 1{(n+1)!\,n}$, and since $t_n<\mathrm e$, $$ \mathrm e-t_n =\dfrac 1{(n+1)!}+\dfrac 1{(n+2)!}+\dotsm > \dfrac 1{(n+1)!},$$ hence $$ 0 <\frac{s_n-\mathrm e}{\mathrm e-t_n} < \frac 1n.$$ In
Landau notation, we have $s_n-\mathrm e =o\mkern1.5mu(\mathrm e-t_n)$. One also says that $s_n-\mathrm e$ is $\,$dominated$\,$ by $\,\mathrm e-t_n$.
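A quick numerical sketch (the helper names `s`, `t`, `E` are assumptions of mine) confirms that the ratio of the two errors indeed stays below $1/n$:

```python
from fractions import Fraction
from math import factorial

def t(n):
    # Taylor partial sum t_n = 1 + 1/1! + ... + 1/n!
    return sum(Fraction(1, factorial(k)) for k in range(n + 1))

def s(n):
    # corrected approximation s_n = t_n + 1/(n! * n)
    return t(n) + Fraction(1, factorial(n) * n)

# Rational stand-in for e, accurate to far more digits than needed here.
E = sum(Fraction(1, factorial(k)) for k in range(61))

for n in (2, 5, 10, 20):
    ratio = (s(n) - E) / (E - t(n))
    # 0 < (s_n - e)/(e - t_n) < 1/n, so the ratio tends to 0
    assert 0 < ratio < Fraction(1, n), n
```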