Let $f$ be a continuous function on the interval $[a, b]$ with $f(a)f(b) \leq 0$, and suppose $f(x)=0$ has more than one root in this interval. Which root does the bisection method eventually converge to, assuming we never stop once a precision $\epsilon$ has been reached, but instead continue the bisection process infinitely? This problem appears in the exercises of *Elementary Numerical Analysis* by Conte and de Boor.
My understanding of the problem: suppose at some stage the current interval is $[c, d]$, the left half $\left[c, \frac{c+d}{2}\right]$ contains an even number of roots, and neither $c$ nor $\frac{c+d}{2}$ is a root of $f$. Then $f(c)f\left(\frac{c+d}{2}\right) > 0$, so the method discards the left half despite it containing roots. Hence the method does not necessarily converge to the smallest root of $f$ in $[a, b]$. On the other hand, if a left endpoint is ever a root, then no matter how $f$ behaves to the right of that endpoint, the method converges to it; so there are indeed cases where it converges to the left endpoint (for example, if $a$ itself is a root of $f$). I'm stuck on this problem and would like to know the answer. Thanks!
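To make the "skipping" behavior concrete, here is a small illustration of my own (the function and interval are my choices, not from the book): $f(x) = (x-1)(x-2)(x-3)$ on $[0, 4.4]$. The first midpoint is $2.2$, and $f(0)f(2.2) > 0$ because the left half $[0, 2.2]$ contains an even number of roots ($1$ and $2$); so bisection discards both of them and converges to $3$, not to the smallest root.

```python
def f(x):
    # Cubic with three roots in the bracket: x = 1, 2, 3
    return (x - 1.0) * (x - 2.0) * (x - 3.0)

def bisect(f, a, b, iters=60):
    """Plain bisection: keep the half on which the sign change persists.

    Uses the convention f(a)*f(m) <= 0 => keep [a, m]; if a left endpoint
    is ever a root, the product is 0 forever and the interval collapses
    onto that endpoint.
    """
    for _ in range(iters):
        m = (a + b) / 2.0
        if f(a) * f(m) <= 0:
            b = m  # sign change (or a root) in the left half
        else:
            a = m  # otherwise the sign change must be in the right half
    return (a + b) / 2.0

root = bisect(f, 0.0, 4.4)
print(root)  # converges to 3: the roots 1 and 2 are skipped at the first step
```

After 60 halvings the bracket has width $4.4/2^{60}$, so the returned value agrees with $3$ to machine precision. Changing the right endpoint changes which root survives, which is exactly why the question "which root?" is subtle.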