Suppose $a<b$, that $f:(a,b)\to\mathbb{R}$ is a differentiable, nowhere-linear function, that $f(c)=0$ for some $c\in(a,b)$, and that $f'(x)\neq 0$ for all $x\in(a,b)$. Let $x_0=a$ and $x_1=b$. Here "nowhere-linear" means there do not exist $t_1,t_2$ with $a\leq t_1<t_2\leq b$ such that $f(u) = \frac{(f(t_2)-f(t_1))(u-t_1)}{t_2-t_1} + f(t_1)$ for all $u\in(t_1,t_2)$; more colloquially, $f$ is not a straight line on any subinterval.
For every $n\in\mathbb{N},$
- Let $x_{n+1}$ be any point in the open interval with endpoints $x_{n-1}$ and $x_n$ such that $\ f'(x_{n+1}) = \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$ (such a point exists by the Mean Value Theorem).
- If $f(x_{n+1})f(x_n)>0$, discard $x_n$ and relabel $x_{n-1}$ as $x_n.$
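To make the iteration concrete, here is a minimal numerical sketch of the two steps above. The MVT point is located by bisecting $g(x) = f'(x) - \text{slope}$, which assumes $g$ changes sign across the bracket (true, e.g., for a convex $f$ such as the one in the example); the function names and the test function $f(x)=e^x-2$ are my own choices, not part of the question.

```python
import math

def mvt_point(f, fprime, x_prev, x_cur, tol=1e-13):
    """Find x strictly between x_prev and x_cur with f'(x) equal to the
    secant slope over the pair (such an x exists by the MVT). Located by
    bisecting g(x) = f'(x) - slope, assuming g changes sign on the bracket."""
    slope = (f(x_cur) - f(x_prev)) / (x_cur - x_prev)
    lo, hi = min(x_prev, x_cur), max(x_prev, x_cur)
    g = lambda x: fprime(x) - slope
    if g(lo) * g(hi) > 0:          # no sign change: fall back to the midpoint
        return 0.5 * (lo + hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def mvt_method(f, fprime, a, b, iters=40):
    """The iteration from the question: x_0 = a, x_1 = b; each x_{n+1} is an
    MVT point of the current pair, and if f(x_{n+1}) and f(x_n) share a sign,
    x_n is discarded and x_{n-1} is relabelled as x_n."""
    x_prev, x_cur = a, b
    for _ in range(iters):
        if abs(x_cur - x_prev) < 1e-9:
            break
        x_new = mvt_point(f, fprime, x_prev, x_cur)
        if f(x_new) * f(x_cur) > 0:
            x_cur = x_prev             # discard x_n, relabel x_{n-1} as x_n
        x_prev, x_cur = x_cur, x_new
    return x_cur

# Example: f(x) = e^x - 2 on (0, 2); the unique root is c = ln 2.
root = mvt_method(lambda x: math.exp(x) - 2, math.exp, 0.0, 2.0)
```

On this convex example the iterates do appear to home in on $c=\ln 2$, but of course one worked function proves nothing about the general claim below.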
Prove or disprove: $\displaystyle\lim_{n\to\infty}x_n = c.$
If it does converge, I would be interested in determining the rate of convergence of this method, and whether it is of any practical use compared to Newton–Raphson. But the question is hard enough as it is, so let's tackle convergence first.
Also, if we change the first condition to "1. We are allowed to choose any $x_{n+1}$ in the interval $(x_{n-1},x_n)$ with the property that $\ f'(x_{n+1}) = \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$ that we like," the question becomes easier, because we can choose $x_{n+1}$ as close to the root as possible; even then, though, I don't know how to prove $\displaystyle\lim_{n\to\infty}x_n = c.$
Also, I wonder whether either of these methods is some sort of Newton–Raphson in disguise?