My aim is to find the value of $\log_e 1.2$ correct to $7$ decimal places using the Taylor series
$$\log_e(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots$$
For accuracy to $7$ decimal places, the absolute error term must satisfy $R_n(0.2) < 0.5\times 10^{-7}$.
1. Now if I take $R_n=\frac{x^n}{n}$, where $n\ge 1$, then $n=9$, which means:
a. only $9$ terms are sufficient, or
b. only $8$ terms are sufficient (as the $n=9$ term contributes to the error).
2. And if I take $R_n=\frac{x^{n+1}}{n+1}$, where $n\ge 0$, then $n=9$, which means:
a. the terms up to $n=9$, i.e. $10$ terms, are sufficient, or
b. only $9$ terms are sufficient (as the $n=9$ term contributes to the error, so the terms corresponding to $0\le n\le 8$, i.e. $9$ terms, are required).
Added: I am completely confused about how to reach the correct conclusion, but after Michael Hardy's answer I think that 2.b is correct. Kindly tell me which one is correct. Thanks in advance.
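To put the raw numbers next to these two counts, here is a quick check I wrote (a Python sketch; `x` and `tol` are just my names for $0.2$ and $0.5\times 10^{-7}$). It only prints the magnitude $x^n/n$ of each term against the tolerance, so either indexing convention can be read off from it:

```python
x, tol = 0.2, 0.5e-7

# Magnitude of the n-th term x^n/n of log(1+x), compared with the tolerance.
for n in range(1, 12):
    size = x ** n / n
    print(f"n = {n:2d}: x^n/n = {size:.3e}  ({'< tol' if size < tol else '>= tol'})")
```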
In a series whose terms alternate in sign, with the absolute values of the terms decreasing to $0,$ the error after summing the first $n$ terms is always smaller than the absolute value of the next term. It's easy to prove that; see if you can do it.
So \begin{align} & 0.2 - \frac{0.2^2}{2} + \frac{0.2^3}{3} - \frac{0.2^4}{4} + \cdots \\[10pt] = {} & 0.2 - 0.02 + \frac{0.008}{3} + \text{error} \\[10pt] \text{and } |\text{error}| < {} & \frac{0.2^4}{4} = \frac{0.0016}{4} = 0.0004. \end{align} Now keep going until the error is small enough.
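If you want to carry that out mechanically, here is a minimal sketch (Python; `tol` stands for the $0.5\times 10^{-7}$ target from the question). It adds one term at a time and stops as soon as the first omitted term, which bounds the error, drops below the tolerance:

```python
import math

x, tol = 0.2, 0.5e-7
total, n = 0.0, 0

while True:
    n += 1
    total += (-1) ** (n + 1) * x ** n / n  # add the n-th term of the series
    bound = x ** (n + 1) / (n + 1)         # first omitted term bounds the error
    if bound < tol:
        break

print(f"{n} terms give {total:.9f} (error bound {bound:.3e})")
print(f"math.log(1.2)  = {math.log(1.2):.9f}")  # cross-check against the library
```

It stops after $9$ terms, with error bound $0.2^{10}/10 \approx 1.0\times 10^{-8} < 0.5\times 10^{-7}$, which matches conclusion 2.b in the question.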