When can a function serve as its own correction for finding its roots via an iteration function? What is it called in this case?


What are the criteria for a function $f(t)$ to serve as the correction in an iteration function of the form $g(t) = t - \lambda f(t)$, where $\lambda$ is some relaxation factor? It is reminiscent of Newton's iteration, but without the derivative.
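As a concrete sketch of this iteration (the function and parameter names `relaxed_iteration`, `lam`, `n_iter` are my own, not from the question):

```python
import math

# Minimal sketch of the relaxed iteration g(t) = t - lam*f(t).
def relaxed_iteration(f, t0, lam=1.0, n_iter=50):
    """Repeatedly apply g(t) = t - lam*f(t) starting from t0."""
    t = t0
    for _ in range(n_iter):
        t = t - lam * f(t)
    return t

# With f = sin and lam = 1, starting near 0 the iterates approach the root t = 0.
root = relaxed_iteration(math.sin, 0.5)
```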

For instance, if $f(t)=\sin(t)$ then $g(t)$ has attracting fixed points at $t=n\pi$ for even $n$ and repelling fixed points for odd $n$.

If the iteration is instead $g(t) = t + \lambda f(t)$, then the attracting and repelling fixed points are swapped, so for $f(t)=\sin(t)$ the fixed points at $t=n\pi$ are repelling for even $n$ and attracting for odd $n$.
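This swap can be checked numerically; the sketch below (with `iterate` and the choice $\lambda = 1/2$ being my own) starts near the odd fixed point $t=\pi$ and runs both sign variants:

```python
import math

# g(t) = t + sign*lam*f(t); sign=-1 is the original iteration, sign=+1 the swapped one.
def iterate(f, t0, lam, sign, n=100):
    t = t0
    for _ in range(n):
        t = t + sign * lam * f(t)
    return t

# Starting near t = pi (odd n): the minus iteration is repelled and drifts down
# to the even fixed point at 0, while the plus iteration converges to pi.
minus_result = iterate(math.sin, 3.0, 0.5, -1)
plus_result = iterate(math.sin, 3.0, 0.5, +1)
```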

Do functions which have this property have a special name?

Perhaps the iteration (with $\lambda = 1$) converges to a root of $f(t)=0$ exactly when $0<f'(t)<2$ there: in that case $|g'(t)| = |1 - f'(t)| < 1$, so the fixed point is attracting.
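The criterion can be probed with the toy family $f(t)=a\,t$, whose root is $0$ with $f'(0)=a$ (a sketch assuming $\lambda=1$; the names are mine):

```python
# Toy check of the conjectured criterion 0 < f'(root) < 2 for g(t) = t - f(t).
# With f(t) = a*t, each step multiplies t by (1 - a), so the iteration
# contracts exactly when |1 - a| < 1, i.e. 0 < a < 2.
def converges(a, t0=0.1, n=200, tol=1e-8):
    t = t0
    for _ in range(n):
        t = t - a * t
        if abs(t) > 1e6:   # clearly diverged
            return False
    return abs(t) < tol
```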

Iterating the function $t-Z(t)$, where $Z$ is the Hardy Z function, does not converge to any zero where $Z'(t)$ is outside $[0,2]$, at least for the first 20 odd-indexed zeros I checked. The following table demonstrates this.

The left column is the difference between the starting point (the $(2n-1)$th zero minus $0.1$) and the result of 50 iterations of the iteration function; the right column is the derivative of $Z$ evaluated at the $(2n-1)$th zero:

$ \left[ \begin {array}{cc} 0.0& 0.7931604332\\ 0.0 & 1.371721287\\ 0.0& 1.382119539 \\ 0.0& 1.490610763\\ 0.0& 1.568031477\\ - 1.11005001& 2.426579069 \\ 0.0& 1.391805619\\ - 0.38497400& 2.287779010\\ - 0.59958219& 2.186311017 \\ - 0.00000004& 1.779555993\\ - 0.98459094& 2.637886209\\ - 0.48377276& 2.161778835 \\ - 0.32348014& 2.176460788\\ 0.0& 1.479402184\\ - 1.37854250& 3.515767073 \\ - 0.3298843& 2.167414624\\ \infty & 2.982497202\\ 0.0& 1.361150829\\ - 14.4201635& 3.119005954 \\ - 0.3041748& 2.294939525\end {array} \right] $

I'm sure I would find the same thing with the iteration function $t+Z(t)$.

To "fix" this, one can iterate $t-\tanh(f(t))$ instead; then the derivative can be no more than 2. See "Does this Newton-like iterative root-finding method based on the hyperbolic tangent function have a name?"
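A sketch of this tanh-modified iteration (names mine; one observable benefit is that $|\tanh|\le 1$, so each step moves the iterate by at most 1, ruling out the runaway jumps seen in the table):

```python
import math

# Sketch of the tanh-modified iteration g(t) = t - tanh(f(t)).
# Because |tanh| <= 1, each update changes t by at most 1.
def tanh_iteration(f, t0, n=100):
    t = t0
    for _ in range(n):
        t = t - math.tanh(f(t))
    return t

# Example: with f = sin, starting at 0.5 the iterates approach the root t = 0.
root = tanh_iteration(math.sin, 0.5)
```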

It is the set of functions whose derivative at the roots is greater than 0 and less than 2. If anyone has a good idea of how to prove this, please share; it is a conjecture based on the empirical fact that iterating this method with the Hardy Z function converges when the derivative at the zero lies in $(0,2)$.


There is 1 answer below.


The crux of your method is the $\lambda$ factor. It is a variation of trial and error, and a perfectly valid method when used with caution. I don't know of a particular name for it, but it is a very simple and natural approach. Its effectiveness can be improved by adjusting $\lambda$ depending on how close $f(t)$ is to zero, and perhaps on other data. One example of this is Newton's method, as mentioned in a comment.
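The adaptive-$\lambda$ idea can be sketched as follows (my own illustration, not from the answer): choosing $\lambda = 1/f'(t)$ at every step turns the iteration into Newton's method.

```python
import math

# Sketch: adapting the relaxation factor at each step. With lam = 1/f'(t)
# the update t -> t - lam*f(t) becomes Newton's method t -> t - f(t)/f'(t).
def adaptive_iteration(f, fprime, t0, n=50):
    t = t0
    for _ in range(n):
        lam = 1.0 / fprime(t)   # assumes f'(t) != 0 along the way
        t = t - lam * f(t)
    return t

# Example: root of t^2 - 2 starting from 1.5.
root = adaptive_iteration(lambda t: t * t - 2, lambda t: 2 * t, 1.5)
```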