I need to implement the log function using a Taylor series, and I need the error to be smaller than an epsilon that is input by the user. How can I estimate the error at each iteration and detect when it is smaller than that epsilon? More generally, how can I estimate the error in numerical methods — for example, how can I estimate the error when using the Newton-Raphson method?
thanks.
Taylor's theorem gives you an error bound in terms of a derivative of the function, so you can bound the error by bounding that derivative. For example, for the Taylor series of $\sin(x)$ around $0$ we have $$\sin(x)=x-\frac{x^3}6+\frac 1{24}f^{(4)}(a)x^4,$$ where $a$ is some point between $0$ and $x$. Since all derivatives of the sine function are bounded by $1$ in absolute value, the error is no greater than $\frac {x^4}{24}$. If we restrict $x$ to less than $0.1$ in absolute value, this bound is at most $\frac{0.1^4}{24}\approx 4.2\times 10^{-6}$. When the alternating series theorem applies, as it does here once the terms start to decrease in absolute value, the error is smaller than the first neglected term and has the same sign.
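As a quick numerical spot check of the bound above (the variable names here are just illustrative), we can verify that the cubic approximation to $\sin(x)$ stays within $\frac{x^4}{24}$ of the true value:

```python
import math

# Check the Taylor remainder bound |sin(x) - (x - x^3/6)| <= x^4/24
x = 0.1
approx = x - x**3 / 6           # degree-3 Taylor polynomial of sin at 0
bound = x**4 / 24               # Lagrange remainder bound, since |f^(4)| <= 1
error = abs(math.sin(x) - approx)
assert error <= bound
```

The actual error here is even smaller than the bound, since the $x^4$ coefficient of the sine series is zero; the bound is simply what Taylor's theorem guarantees without further analysis.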
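Applied to the log function you asked about, one possible sketch (assuming you use the expansion $\ln(1+x)=x-\frac{x^2}2+\frac{x^3}3-\cdots$, valid for $|x|<1$) looks like this. For $0<x<1$ the series is alternating with decreasing terms, so the alternating series theorem lets you stop as soon as the next term drops below the user's epsilon; the function name and argument convention are my own choices, not a standard API:

```python
import math

def taylor_log1p(x, eps):
    """Approximate ln(1+x) for 0 < x < 1 via the Taylor series
    ln(1+x) = x - x^2/2 + x^3/3 - ...
    The series alternates with terms decreasing in absolute value,
    so the error is bounded by the first neglected term: we stop
    as soon as that term is below eps."""
    total = 0.0
    power = x        # holds x**n
    n = 1
    while power / n >= eps:          # next term still too large
        total += (-1) ** (n + 1) * power / n
        power *= x
        n += 1
    return total                     # |total - ln(1+x)| < eps
```

For example, `taylor_log1p(0.5, 1e-12)` agrees with `math.log(1.5)` to within the requested tolerance. Note that convergence is slow as $x$ approaches $1$, so in practice you would first reduce the argument into a small range.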