In the context of root finding, it is often stated that the bisection method is slower than Newton's method because of its linear convergence. However, I am trying to understand why this is the case from an algorithmic time complexity viewpoint.
My understanding is that the time complexity of Newton's method is $O(\log(N)\,F(N))$, where $F(N)$ is the cost of evaluating $\frac{f(x)}{f'(x)}$ to $N$-digit precision. But since the bisection method halves the search interval at each iteration, I was under the impression that its time complexity was also logarithmic. Could anyone shed some light on why the bisection method is slower than Newton's method from a complexity point of view? Any comments or references to the literature would be greatly appreciated.
Each bisection step halves the interval and so gains exactly one bit of the answer; reaching $N$-bit precision therefore takes $N$ steps, for a total cost of $\sim N\cdot\mathrm{eval}(f)$. (Bisection is logarithmic in the tolerance $2^{-N}$, but that is the same as being linear in the number of digits $N$ — this is the source of the confusion.) Newton's method converges quadratically, doubling the number of correct digits per step, so it needs only $\sim\log(N)$ steps; its cost is, as you wrote, $\sim 3\cdot\log(N)\cdot\mathrm{eval}(f)$, using the fact that $\mathrm{eval}(f,f')=3\cdot\mathrm{eval}(f)$ when automatic/algorithmic differentiation is used.
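The iteration counts are easy to observe numerically. Here is a minimal sketch (the function names and the test problem $f(x)=x^2-2$ are my own choices, not from the question) counting steps to a fixed tolerance:

```python
def bisect(f, a, b, tol):
    """Bisection: halves [a, b] each step, i.e. gains one bit per step."""
    steps = 0
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
        steps += 1
    return (a + b) / 2, steps

def newton(f, df, x, tol):
    """Newton: roughly doubles the number of correct digits each step."""
    steps = 0
    while abs(f(x)) > tol:
        x = x - f(x) / df(x)
        steps += 1
    return x, steps

f = lambda x: x * x - 2      # root at sqrt(2)
df = lambda x: 2 * x

tol = 1e-12                  # about 40 bits of accuracy
rb, nb = bisect(f, 1, 2, tol)
rn, nn = newton(f, df, 1.5, tol)
# nb is on the order of 40 (one bit per step);
# nn is only a handful of steps (digits double each time)
print(nb, nn)
```

In double precision the distinction between $N$ and $\log(N)$ steps is already visible; in arbitrary-precision arithmetic, where $N$ grows large and each evaluation costs $F(N)$, the gap dominates the total running time.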