Estimation of error in numerical methods


I need to implement the log function using a Taylor series, and the error must be smaller than an epsilon that the user supplies as input. How can I estimate the error at each iteration and detect when it drops below that epsilon? More generally, how can I estimate the error in numerical methods, for example when using the Newton-Raphson method?

Thanks.


Taylor's theorem gives you an error bound in the form of a derivative of the function: you bound the error by bounding that derivative term. For example, for the Taylor series of $\sin(x)$ around $0$ we have $$\sin(x)=x-\frac{x^3}6+\frac 1{24}f^{(4)}(a)x^4$$ where $a$ lies between $0$ and $x$. Since all derivatives of the sine function are bounded by $1$ in absolute value, the error is no greater than $\frac {x^4}{24}$. If we restrict $x$ to less than $0.1$ in absolute value, this is rather small. When the alternating series theorem applies, as it does here once the terms start to decrease, the error is smaller in magnitude than the first neglected term and has the same sign.
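A quick numerical check of this bound, sketched in Python (using `math.sin` as the reference value; the function name `sin_taylor3` is just an illustrative choice):

```python
import math

def sin_taylor3(x):
    """Degree-3 Taylor polynomial of sin around 0."""
    return x - x**3 / 6

# The actual error should stay below the x^4/24 bound from
# Taylor's theorem, and also below the sharper first-neglected-term
# bound x^5/120 from the alternating series theorem.
for x in [0.01, 0.05, 0.1]:
    err = abs(math.sin(x) - sin_taylor3(x))
    assert err <= x**4 / 24        # Lagrange remainder bound
    assert err <= abs(x)**5 / 120  # alternating-series bound
    print(f"x={x}: error={err:.3e}, bound={x**4/24:.3e}")
```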


If you are using:

$$\log(1+x)\approx f_N(x)=\sum_{n=1}^N \frac{(-1)^{n+1} x^n}{n}$$

then the difference between consecutive partial sums, $$|f_{2k-1}(x)-f_{2k}(x)|,$$ makes a good error estimate: for $0<x\le 1$ the series alternates with decreasing terms, so the true error is at most the magnitude of the first neglected term.
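In code, the alternating structure lets you stop as soon as the next term falls below the user's epsilon. A minimal sketch (the error bound is only guaranteed for $0 < x \le 1$, and convergence is slow as $x$ approaches $1$):

```python
import math

def log1p_taylor(x, eps):
    """Approximate log(1+x) by summing (-1)^(n+1) x^n / n
    until the next term is smaller than eps.

    For 0 < x <= 1 the series alternates with decreasing terms,
    so the first neglected term bounds the true error.
    """
    total = 0.0
    n = 1
    term = x  # the n-th term (-1)^(n+1) x^n / n, starting at n = 1
    while abs(term) >= eps:
        total += term
        n += 1
        # next term = previous term * (-x) * (n-1)/n
        term = -term * x * (n - 1) / n
    return total

print(log1p_taylor(0.5, 1e-8), math.log(1.5))
```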

You might be interested in the algorithm described in this question: Prove an algorithm for logarithmic mean $\lim_{n \to \infty} a_n=\lim_{n \to \infty} b_n=\frac{a_0-b_0}{\ln a_0-\ln b_0}$.

In this case the error estimate for $\log a_0-\log b_0$ would be simply $$\left|\frac{a_0-b_0}{a_n}-\frac{a_0-b_0}{b_n}\right|=\left|\frac{(a_0-b_0)(a_n-b_n)}{a_n b_n}\right|$$