I have a costly scalar black box function $f(x)$ and want to find the root $x_0$ with $f(x_0) = 0$ of that function.
Several root-finding algorithms I have seen so far (e.g. brent, ridder, toms748) control the accuracy of the solution by how far $x_{result}$ is from $x_0$.
For me it is much more interesting to know how much $f(x_{result})$ differs from $0$. There are some algorithms that offer this (like broyden1 in scipy), but it seems to me that the focus lies on bounding $x_{result}$ rather than $f(x_{result})$.
My first question is whether there are variations of brent or ridder that allow terminating the search based on $f(x_{result})$.
My second question is whether somebody has experience with, or knows references discussing, the (dis)advantages of algorithms that terminate based on $f(x_{result})$.
The simplest reason why terminating based on $\|f(x)\|$ is not commonly used is that you can scale your function to $\alpha f(x)$ for arbitrary $\alpha\ne0$, which leaves the root unchanged but completely changes when the criterion $\|f(x)\|<\epsilon$ is met. In most cases, this makes it an unreliable termination condition on its own.
Usually, this shouldn't matter anyway. Near a simple root, $f(x)\approx f'(x_\star)(x-x_\star)$, so $\|f(x)\|$ shrinks at the same rate as $\|x-x_\star\|$. For bracketing methods, a tolerance should usually be used to prevent $x$ from changing by less than a minimum amount per step. This way, you should not have one bound of the root converging rapidly while the other bound stagnates, which would prevent $\|x_1-x_2\|<\epsilon$ from ever occurring.
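To see the scaling problem concretely, here is a minimal sketch (the function `bisect_ftol` is a toy bisection written for illustration, not a library routine) that terminates on $\|f(x)\|<\epsilon$. Multiplying $f$ by a small constant leaves the root unchanged but makes the solver stop far earlier:

```python
def bisect_ftol(f, a, b, eps, max_iter=200):
    """Toy bisection on [a, b] that terminates once |f(midpoint)| < eps."""
    fa = f(a)
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < eps:          # residual-based stopping criterion
            return m
        if fa * fm < 0:            # root lies in [a, m]
            b = m
        else:                      # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

f = lambda x: x**3 - 2             # simple root at 2**(1/3) ~ 1.2599

x_plain  = bisect_ftol(f, 1.0, 2.0, eps=1e-8)
# Same root, but the residual test is satisfied while x is still far away:
x_scaled = bisect_ftol(lambda x: 1e-6 * f(x), 1.0, 2.0, eps=1e-8)

print(x_plain, x_scaled)
```

The scaled run returns a point whose distance from the true root is orders of magnitude larger, even though both runs "converged" by the same residual test.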
If you do insist on using $\|f(x)\|<\epsilon$, then it usually suffices to redefine $f$ as follows:
$$\hat f(x)=\begin{cases}0,&\|f(x)\|<\epsilon\\f(x),&\text{else}\end{cases}$$
Most root-finders will terminate as soon as they detect an exact zero, so they can be made to stop under any custom condition you desire by returning $0$ whenever that condition holds.
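As a sketch of this trick with an actual bracketing solver: SciPy's `brentq` returns immediately when the supplied function evaluates to exactly zero, so wrapping $f$ as $\hat f$ makes it stop once the residual is below the chosen threshold (the helper name `make_fhat` and the tolerance value are illustrative):

```python
from scipy.optimize import brentq

def make_fhat(f, eps):
    """Wrap f so that any |f(x)| < eps is reported as an exact zero."""
    def fhat(x):
        fx = f(x)
        return 0.0 if abs(fx) < eps else fx
    return fhat

f = lambda x: x**3 - 2             # simple root at 2**(1/3)

# brentq stops as soon as fhat evaluates to 0, i.e. once |f(x)| < 1e-6:
root = brentq(make_fhat(f, 1e-6), 1.0, 2.0)
print(root, f(root))
```

Note that the bracket endpoints must still satisfy the sign condition on $\hat f$, and for pathological scalings $\hat f$ inherits the same fragility discussed above.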