Suppose that $x_0,x_1,\dots$ are the successive approximations of a solution produced by some iterative scheme (think of the successive approximations in the Newton-Raphson method for finding a root of a nonlinear equation, for example). The following stopping criteria are commonly used to decide when an approximation is close enough to the root; a short code sketch of these tests follows the list.
- $|x_{n+1}-x_n|<\epsilon$ (absolute error)
- $|f(x_n)|<\epsilon$
- $\frac{|x_{n+1}-x_n|}{|x_n|}<\epsilon$ (relative error)
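To fix ideas, here is a minimal Python sketch of a Newton-Raphson loop with the three tests written out side by side. The function, derivative, starting point, and tolerance are illustrative choices, not anything from a particular book:

```python
import math

def newton(f, df, x0, eps=1e-12, max_iter=100):
    """Newton-Raphson with the three stopping tests shown side by side.

    f, df, x0, and eps are illustrative choices for this sketch.
    """
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        abs_err = abs(x_new - x)         # criterion 1: |x_{n+1} - x_n| < eps
        residual = abs(f(x_new))         # criterion 2: |f(x_n)| < eps
        rel_err = abs_err / abs(x)       # criterion 3: relative error < eps
        if rel_err < eps:                # stopping on the third criterion here
            return x_new
        x = x_new
    return x

# Example: root of x^2 - 2 starting from x = 1, converging to sqrt(2).
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0), math.sqrt(2.0))
```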
For the first two, I understand why they are used, how they work, when they fail, and so on. However, books say that the third criterion is better than the first two, and I am not able to see (i) why the third one works and (ii) how it is better than the first two.
Any explanation on this is greatly appreciated.
The third has the advantage that, unless you set $\epsilon$ too small, you are (essentially) guaranteed that there are representable values that will meet it. This is about computer numbers, not mathematical numbers. If you have $53$ bits in the mantissa of your floating-point numbers, changing the lowest bit makes a relative change of about $2^{-53}$ in the number, regardless of its magnitude. The only way this can fail is if you are at the end of the exponent range of your representation.
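A quick way to see this is to look at the gap between adjacent doubles (a Python sketch; `math.ulp` is available from Python 3.9):

```python
import math

# The gap between x and the next representable double grows with |x|,
# but the *relative* gap stays near 2^-52 across the exponent range.
for x in (2.0**-40, 1.0, 2.0**40):
    gap = math.ulp(x)  # one unit in the last place of x
    print(f"x = {x:.3e}   gap = {gap:.3e}   relative gap = {gap / x:.3e}")
```

All three relative gaps come out near $2^{-52}\approx 2.2\times 10^{-16}$, so any relative tolerance comfortably above that is achievable no matter how large or small the root is.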
By contrast, for a specific function there may not be any representable $x$ values that satisfy the first two. If you set $\epsilon=2^{-20}$ in the first and $x$ is of order $2^{40}$, the gap between adjacent doubles near $x$ is about $2^{40-52}=2^{-12}$, which is far larger than $2^{-20}$, so you cannot satisfy the test unless $x_{n+1}=x_n$ exactly. For the second, if $f(x)$ is changing rapidly near the root, there may not be any $x$ values that give a value that small.
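Both failure modes are easy to reproduce (again a Python sketch, using the numbers from the paragraph above; the steep function is a made-up example):

```python
import math

# Criterion 1: near x ~ 2^40, distinct doubles differ by at least ulp(x).
eps = 2.0**-20
x = 2.0**40
gap = math.ulp(x)       # 2^(40-52) = 2^-12
print(gap == 2.0**-12)  # True
print(gap < eps)        # False: |x_{n+1} - x_n| < eps forces x_{n+1} == x_n

# Criterion 2: a steep f may never get small at any representable x.
# f(x) = 1e18 * sin(x) has a root at pi, but at the closest double to pi:
print(abs(1e18 * math.sin(math.pi)))  # ~122.5; neighboring doubles are worse
```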
This does not mean that the third is necessarily better than the first two. If you know something about $f$, you can analyze the problem and convince yourself that one of the first two will work.