Assume that I have a twice differentiable function $f(x)$ which I try to maximize with respect to $x$ (say $x$ is a $k$-dimensional vector). When performing the optimization via Newton's algorithm, and assuming that the sequence of guesses $x_0, x_1, \ldots , x_n$ converges to the correct value $\hat x$, does the following hold:
$||x_{p+1}-x_p|| \leq ||x_p - x_{p-1}||$,
where $1 \leq p < n$? What I am actually after is whether I can use the above property (if it really holds, as I assume it does) to check whether my algorithm is diverging; i.e., if my parameter estimate changes more than it did in the previous step, I might be overshooting, so I backtrack until the condition above holds.
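To make the idea concrete, here is a minimal sketch of what I have in mind (the function `newton_maximize`, the halving factor, and the test problem are my own illustration, not an established algorithm): take the full Newton step, and if it is longer than the previous step, halve it until the condition above is satisfied.

```python
import numpy as np

def newton_maximize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method for maximization with the step-size heuristic
    from the question: if the proposed step is longer than the previous
    one, halve it until ||x_{p+1} - x_p|| <= ||x_p - x_{p-1}||."""
    x = np.asarray(x0, dtype=float)
    prev_step_norm = np.inf  # no restriction on the very first step
    for _ in range(max_iter):
        # Full Newton step: solve H(x) s = -g(x).
        step = -np.linalg.solve(hess(x), grad(x))
        # Backtrack: shrink the step while it exceeds the previous one.
        while np.linalg.norm(step) > prev_step_norm:
            step *= 0.5
        x = x + step
        if np.linalg.norm(step) < tol:
            break
        prev_step_norm = np.linalg.norm(step)
    return x

# Toy example: maximize f(x) = -(x1 - 1)^2 - 2*(x2 + 3)^2, maximum at (1, -3).
grad = lambda x: np.array([-2.0 * (x[0] - 1.0), -4.0 * (x[1] + 3.0)])
hess = lambda x: np.array([[-2.0, 0.0], [0.0, -4.0]])
x_hat = newton_maximize(grad, hess, x0=[10.0, 10.0])
```

Of course, on a quadratic like this Newton converges in one step; the backtracking loop only matters for functions where the full step can overshoot.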