Is it possible for the bisection method to provide "fake" zeros?


I've read about the bisection method for finding roots of a function in my numerical analysis textbook and one question came to my mind.

Given a relatively complicated function, the chances of finding the exact root (that is, a root whose value is represented exactly in the computer's memory, to all significant figures) are very low. This means that most of the time we will have a value at which the function is very close to, but not exactly equal to, zero.

So what would happen if the function had one root, and another value at which the function gets extremely close to zero without actually getting there? Would the algorithm fail? And what would such a "fake" root mean; is it worth anything?

Thank you.

EDIT: here is a picture showing what I meant


There are 4 answers below.

Accepted answer (0 votes):

The bisection method only cares if the function changes sign, so it goes straight past the 'fake' root without noticing.

If the coefficients of the function carry a slight error, then perhaps the 'fake' root should have been a genuine root of the intended function.
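As a concrete illustration (a minimal sketch; the function and interval below are my own invented example, not from the thread), here is a bisection run on a function with one true root at $x=-1$ and a near-zero dip at $x=1$. The method converges to the sign change and never reacts to the dip:

```python
def f(x):
    # One true root at x = -1; near x = 1 the factor (x - 1)^2 + 1e-3
    # dips down to 1e-3 but never reaches zero.
    return (x + 1.0) * ((x - 1.0) ** 2 + 1e-3)

def bisect(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "endpoints must bracket a sign change"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(f, -2.0, 2.0)
print(root)  # converges to the true root at -1; the dip near x = 1 is ignored
```

Since bisection only ever inspects the sign of $f$ at midpoints, the dip has no way to attract the iteration unless the interval happens to lose its sign change, which cannot happen here.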

Answer (2 votes):

As long as $f\left(\frac {a+b}2\right)$ evaluates to a nonzero value, the method works fine: it replaces the endpoint whose function value has the same sign as $f\left(\frac {a+b}2\right)$, so the interval still brackets a sign change. The usual termination criterion for bisection and other bracketing approaches is the length of the interval, not the function value. The method keeps going, knowing that there is a root somewhere in the interval because the function values at the endpoints have opposite signs.
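To make the termination criterion concrete, here is a sketch (my own example, not from the answer) contrasting the interval-length stop with a naive $|f|$-based stop on $f(x)=x^3$, whose values are tiny well before $x$ is close to the root:

```python
def f(x):
    return x ** 3  # |f(x)| < 1e-9 already for |x| < 1e-3

def bisect_interval(f, a, b, tol=1e-10):
    # Standard bracketing: stop when the interval is short.
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def bisect_fvalue(f, a, b, ftol=1e-9, max_iter=200):
    # Naive variant: stop as soon as |f(midpoint)| is small.
    fa = f(a)
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < ftol:
            return m
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

print(bisect_interval(f, -1.0, 2.0))  # within 1e-10 of the root 0
print(bisect_fvalue(f, -1.0, 2.0))    # stops about 1e-3 away from the root
```

The interval-length stop guarantees the returned midpoint is within the tolerance of a sign change; the $|f|$ stop only guarantees a small residual, which for flat functions can be far from any root.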

Answer (6 votes):

You have to be aware that the bisection method finds a point with a sign change in the numerically evaluated values of your function. Due to the catastrophic cancellation that is unavoidable when computing small values close to a root, this can give large errors even for simple roots. Take for instance the rescaled Wilkinson polynomial $p(x)=\prod_{k=1}^{20}(x-k/10)$ in double floating-point precision, after multiplying it out as $x^{20}-21x^{19}+a_{18}x^{18}+...+a_1x+a_0$. Around the root $x=1.9$, the numerical evaluation of a larger sampling of points gives the following picture

[plot: numerically evaluated values of $p$ near $x=1.9$, oscillating in sign across a narrow band]

so that, depending on the initial interval, the bisection method might end at any point between $1.8999$ and $1.9003$.


To put this into perspective, the scale for this situation, polynomial evaluation for $|x|\le 2$, is provided by $\bar p(x)=|x|^n+|a_{n-1}|\,|x|^{n-1}+\dots+|a_1|\,|x|+|a_0|$ and the bound $\bar p(2)=p(-2)=3.35367\ldots\cdot 10^9$. The expected accuracy bound $\bar p(1.9)\mu$, using a machine constant $\mu=2\cdot 10^{-16}$, is then indeed about $7\cdot 10^{-7}$; the observed errors are a little smaller.
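This behaviour is easy to reproduce (a sketch in Python; the naive coefficient expansion and plain Horner evaluation below are my reconstruction of what the answer's experiment amounts to). Expanding $\prod_{k=1}^{20}(x-k/10)$ into monomial coefficients in double precision and evaluating near $x=1.9$ gives values dominated by rounding noise, while the factored form is still exact at $x=1.9$:

```python
def expand_coeffs():
    # Multiply out prod_{k=1..20} (x - k/10) into monomial coefficients,
    # highest degree first, entirely in double precision.
    c = [1.0]
    for k in range(1, 21):
        r = k / 10.0
        nxt = [0.0] * (len(c) + 1)
        for i, ci in enumerate(c):
            nxt[i] += ci          # contribution of x * c_i
            nxt[i + 1] -= r * ci  # contribution of -r * c_i
        c = nxt
    return c

def horner(c, x):
    # Evaluate the expanded polynomial by Horner's rule.
    v = 0.0
    for ci in c:
        v = v * x + ci
    return v

def factored(x):
    v = 1.0
    for k in range(1, 21):
        v *= x - k / 10.0
    return v

c = expand_coeffs()
print(factored(1.9))   # exactly 0.0: the factor (x - 19/10) vanishes
print(horner(c, 1.9))  # tiny rounding noise rather than an exact zero
# Count spurious sign changes of the expanded form in a window around 1.9:
xs = [1.89 + i * 1e-5 for i in range(2001)]
vals = [horner(c, x) for x in xs]
sign_changes = sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)
print(sign_changes)    # at least one; typically many inside the noise band
```

Each such spurious sign change is a point where bisection could legitimately terminate, which is exactly the spread between $1.8999$ and $1.9003$ described above.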

Answer (2 votes):

Suppose we are utilizing a computer that has $N$-bit precision ($N\geq4$). If we take any integer $n > N$ and we apply the bisection algorithm to the function $f$ defined on $[0,1]$ by $f(x)=\left(x-\frac12\right)^2\left(x-\frac34\right)-2^{-n}$, then the algorithm will output $x=\frac12$ as the point where $f$ has a root. This is because $f(0)=-\frac3{16}-2^{-n}<0$, $f(1)=\frac1{16}-2^{-n}>0$, and our computer isn't capable of distinguishing $\,f\left(\frac12\right)=-2^{-n}$ from zero.

Here is a plot of our function $f$ for $n=10$: [plot omitted]

Notwithstanding the output from applying the bisection algorithm to $f$ on $[0,1]$, we can see that the only zero of $f$ in $[0,1]$ lies somewhere between $\frac34$ and $1$.
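A sketch of this scenario in Python (my adaptation: IEEE doubles can still represent $-2^{-n}$ for $n=60$, so the indistinguishability from zero is modelled here by a value-based stopping tolerance, a common but flawed criterion; the tolerance value is my choice):

```python
def f(x, n=60):
    # f(x) = (x - 1/2)^2 (x - 3/4) - 2^(-n); f(1/2) = -2^(-n), tiny but negative.
    return (x - 0.5) ** 2 * (x - 0.75) - 2.0 ** (-n)

def bisect_ftol(f, a, b, ftol=1e-12, max_iter=200):
    # Bisection that stops as soon as |f(midpoint)| < ftol, mimicking a
    # machine that cannot distinguish f(1/2) from zero.
    fa = f(a)
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < ftol:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

print(bisect_ftol(f, 0.0, 1.0))  # 0.5, the "fake" root
print(f(0.75) < 0 < f(1.0))      # True: the genuine root lies in (3/4, 1)
```

The very first midpoint of $[0,1]$ is $\frac12$, where $|f|$ is already below the tolerance, so the algorithm returns immediately, never noticing that the actual sign change sits in $(\frac34, 1)$.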


As for the meaning of such a "fake" root, I would say it alludes to the fact that the Intermediate Value Theorem is equivalent to the nonconstructive proposition known as the lesser limited principle of omniscience.

Define a binary sequence $\{a_n\}_{n=1}^\infty$ by \begin{equation}a_n=\begin{cases}0 &\text{ if either } 2k+3 \text{ is the sum of three primes for all } k\leq n \text{ or there exists } k<n \text{ s.t. } a_k=1 \\1&\text{ otherwise.}\end{cases} \end{equation} Define $a=\sum_{n=1}^\infty a_n2^{-n}$, and apply the bisection algorithm to the function $f$ defined on $[0,1]$ by $f(x)=\left(x-\frac12\right)^2\left(x-\frac34\right)-a$. As long as our computation is limited to some finite precision, the algorithm will output $x=\frac12$ as a root of $f$. This output is correct (which I take to mean that it is either identical or close to a root) if and only if the odd Goldbach conjecture is true.

The way that the binary sequence $\{a_n\}_{n=1}^\infty$ is defined is meant to invoke the limited principle of omniscience, a nonconstructive principle which implies the lesser limited principle of omniscience.


Disclaimer (in response to the valid concerns raised by Euro Micelli): my "answer" is not trying to affirm the question in the title; I would say the answer to the question posed in the title is "not yes". I will note that even arbitrary precision is still subject to available memory and computation time (as far as I know). I figure we have two sides of the same coin: the bisection method is not constructive, and neither is the definition of the function $f$ on $[0,1]$. There are indeed ways to preclude such a false output; my response has only considered the algorithm underlying the proof of the Intermediate Value Theorem in its most classical and basic setting. I respond to questions on this forum by trying to provide the asker with some insight and perspective, to the best of my knowledge, given the contents of their initial post.