I'm a little bit lost here.
I need to show that $f(x)= \ln(1 - x)$ is equal to $$ -\sum_{t=1}^{\infty}\frac{x^{t}}{t} $$ for all $x\in (-1,1)$.
I know how to show that the $n$-th degree Taylor polynomial of $f$ about $x=0$ is equal to $$ T_n(x)=-\sum_{t=1}^{n}\frac{x^{t}}{t},$$ but I don't know how to show that the error in this approximation goes to zero as $n\rightarrow \infty $. I tried to use the Lagrange remainder like this:
*We know that $f$ and all its derivatives are continuous on any interval $[a,x] \subset (-1,1)$. Then there is a number $c\in (a,x)$ such that the error in the $n$-th degree Taylor approximation is equal to $$ E(x)=\frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} $$ Now, $$ f^{(n+1)}(x)=\frac{(-1)^{n+2}n!}{(x-1)^{n+1}}\quad\forall n \in \Bbb N, $$ which gives the expression: $$ E(x)=\frac{(-1)^{n+2}n!}{(n+1)!(c-1)^{n+1}}(x-a)^{n+1}=\frac{(-1)^{n}(x-a)^{n+1}}{(n+1)(c-1)^{n+1}} $$ Now, the $(-1)^n$ part of this expression is of no interest, as we only care about the size of the error. This means that we need to prove that $$ \lim_{n\to \infty}\frac{1}{n+1}\left[\frac{x-a}{c-1}\right]^{n+1}=0 $$ Unfortunately, I can't find any argument that it does. My thought is that we can choose $a$ and $x$ such that $x-a$ is as close to $2$ as we want, and the number $c$ might be really close to $1$, so that $\left|\frac{x-a}{c-1}\right|$ is greater than one and hence the error estimate explodes to infinity as $n$ gets large.
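Not part of the proof, but as a quick numerical sanity check (a sketch in Python; the helper name `taylor_partial_sum` is mine, not from the question), the partial sums $T_n(x)$ do appear to converge to $\ln(1-x)$ throughout $(-1,1)$, only more slowly as $|x|$ approaches $1$:

```python
import math

def taylor_partial_sum(x, n):
    """Partial sum T_n(x) = -sum_{t=1}^{n} x^t / t of the series for ln(1 - x)."""
    return -sum(x**t / t for t in range(1, n + 1))

# Compare T_200(x) against ln(1 - x) at points spread over (-1, 1).
for x in (-0.9, -0.5, 0.5, 0.9):
    approx = taylor_partial_sum(x, 200)
    exact = math.log(1 - x)
    print(f"x = {x:5.2f}:  T_200(x) = {approx:.12f},  ln(1-x) = {exact:.12f}")
```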
Can anybody tell me where my reasoning breaks down, and if there is any better way to do this?
The point here is that you don't have to take $a$ and $x$ close to $-1$ and $1$ simultaneously. Quite the opposite: you can take your time and pick $a, x$ as close to each other as you like, as long as your argument works for a collection of intervals that covers all of $(-1,1)$ in the end.
If $x\geq 0$, take $a<x$ such that $|x-a|<|1-x|$. Then, since $c\in (a,x)$ and $x<1$, we have $|c-1|=1-c>1-x>|x-a|$, so $\left|\frac{x-a}{c-1}\right|<1$. Therefore your quantity above does tend to $0$, which proves your equality on that interval.
The same trick works on the other side (close to $-1$).
An easier way, since you asked, would be to use the normal (hence uniform) convergence of the series of derivatives on any interval $[a,b]\subset (-1,1)$, together with the fact that this allows you to integrate term by term.
The derivative of the series on the right-hand side is the well-known Taylor expansion of $\dfrac{1}{1-x}$ (or rather its negative), and its primitive that vanishes at $x=0$ is exactly the function on your left-hand side.
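Spelled out, the term-by-term integration sketched above runs as follows (a worked chain of equalities, valid for $|x|\le r<1$, where the geometric series converges uniformly on $[-r,r]$):

```latex
\[
  \ln(1-x) \;=\; -\int_0^x \frac{dt}{1-t}
           \;=\; -\int_0^x \sum_{k=0}^{\infty} t^k \, dt
           \;=\; -\sum_{k=0}^{\infty} \int_0^x t^k \, dt
           \;=\; -\sum_{k=0}^{\infty} \frac{x^{k+1}}{k+1}
           \;=\; -\sum_{t=1}^{\infty} \frac{x^{t}}{t}.
\]
```

The swap of sum and integral in the third equality is exactly where the uniform convergence on $[-r,r]$ is used.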