Can numerical methods be used to prove a root does NOT exist?


Let's say I have a continuous function $f : I \to \mathbb R$ for $I = [a,b]$ and I want to decide if it has a root or not in $I$. Pretend that I can evaluate it anywhere but cannot use analytical methods to learn anything more about it.

I could take a grid of points $a = p_1 < \dots < p_n = b$, evaluate $f$ at each one, and make a scatterplot with some kind of interpolation. From this I may find a root, but if I don't see one, can I ever be confident that no root actually exists? Can I rule out wild behavior between some pair of points that I missed? E.g., maybe for some $i$ the function dips below $0$ very rapidly just after $p_i$ and comes back up just before $p_{i+1}$, so the plot looks flat only because I've missed something.

My question: what are the circumstances under which we can use a finite number of finite precision function evaluations to prove a root does not exist?

My current guess is that if $f$ is Lipschitz, then we could use its Lipschitz constant $K$ to make the grid fine enough that $f$ has no way to reach a root between grid points without that being visible in the interpolated scatterplot. But if $K$ is large, or $f$ gets very close to $0$, the required grid may be finer than finite precision allows.
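The Lipschitz idea can be sketched concretely. This is a hypothetical routine (the name `certify_no_root_lipschitz` and its parameters are mine, not from any library): on a subinterval of width $h$, $|f|$ can fall by at most $Kh$, so if every sampled value satisfies $|f(p_i)| > Kh$, no root can hide between samples.

```python
def certify_no_root_lipschitz(f, a, b, K, n):
    """Try to certify that f has no root on [a, b], assuming f is
    K-Lipschitz.  Returns True if the certificate succeeds and False
    if it is inconclusive (a root may or may not exist)."""
    h = (b - a) / (n - 1)
    for i in range(n - 1):
        x = a + i * h
        fx = f(x)
        # On [x, x + h] we have |f(t)| >= |f(x)| - K * (t - x), so if
        # |f(x)| > K * h the function cannot reach zero before x + h.
        if abs(fx) <= K * h:
            return False
    return True

# f(x) = x^2 + 1/2 is 2-Lipschitz on [-1, 1] and never closer than 1/2 to 0:
print(certify_no_root_lipschitz(lambda x: x * x + 0.5, -1.0, 1.0, 2.0, 200))
```

Note the failure mode matches the worry above: as $K$ grows or $\min |f|$ shrinks, the number of grid points needed blows up like $K(b-a)/\min|f|$.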

I also wonder whether convexity would do the trick (a convex function on $I$ is Lipschitz on compact subsets of the interior of $I$, so it seems like a natural thing to try next).


2 Answers

Accepted answer:

In exact arithmetic, one way to do this is to obtain a modulus of continuity $\omega_f$. This is (non-uniquely) defined by the property that if $|x-y|<\omega_f(x,\epsilon)$ then $|f(x)-f(y)|<\epsilon$. (There is of course a maximal $\omega_f$, but we rarely have it in hand.)

Suppose you have such an $\omega_f$ and you evaluate $f$ at points $a = x_1 < \dots < x_n = b$. If $|x_i-x_{i+1}|<\omega_f(x_i,|f(x_i)|)$ for all $i$, then $f$ has no zero in the interval.
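A minimal sketch of this certificate, assuming exact arithmetic and a user-supplied `omega` (the function names here are hypothetical). The criterion naturally suggests adaptive stepping: from $x$ one may jump almost $\omega_f(x, |f(x)|)$, so the grid refines itself where $f$ is close to zero and coarsens where it is far away.

```python
def certify_no_root(f, a, b, omega, max_steps=10_000):
    """Try to certify f != 0 on [a, b] given a modulus of continuity:
    omega(x, eps) > 0 is a radius such that |y - x| < omega(x, eps)
    implies |f(y) - f(x)| < eps.  Returns True on success, False when
    inconclusive (an exact zero, no progress, or too many steps)."""
    x = a
    for _ in range(max_steps):
        fx = f(x)
        if fx == 0.0:
            return False                 # a zero sits on a sample point
        # f cannot vanish within omega(x, |f(x)|) of x; step just inside.
        step = 0.9 * omega(x, abs(fx))
        if step <= 0.0:
            return False                 # no progress possible
        if x + step > b:
            return True                  # the safe radius covers the rest
        x += step
    return False

# For a K-Lipschitz f, omega(x, eps) = eps / K is a valid modulus;
# here f(x) = x^2 + 1/2 with K = 2 on [-1, 1].
print(certify_no_root(lambda x: x * x + 0.5, -1.0, 1.0,
                      lambda x, eps: eps / 2.0))
```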

In inexact arithmetic you can do something similar; you just need your approximate values of $f$ to be bounded a bit further away from zero, depending on your precision level.

Answer:

This reminds me of Kahan's spy-function argument, which shows that no deterministic algorithm can numerically integrate all continuous functions correctly. The same idea applies to your root-existence routine. Consider a function spy that simply records every input it receives (and returns, say, $1$). Pass it to your routine. Now use the record produced by spy to create a continuous function malicious that evaluates to $1$ at every recorded point, yet still has a root. A $C^{\infty}[a, b]$ example is an inverted bump function, scaled to fit between a pair of adjacent grid nodes and to dip below $0$ at its peak. You could presumably construct a power series that serves as a suitable malicious as well.
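The construction can be sketched as follows (a toy version; `spy`, `make_malicious`, and the particular bump shape are my own illustrative choices): the adversary waits until the routine has committed to its evaluation points, then hides a smooth downward bump in the widest untouched gap.

```python
import math

queried = []

def spy(x):
    """Returns 1 everywhere, but records every point it is asked about."""
    queried.append(x)
    return 1.0

def make_malicious(points, a, b):
    """Build a smooth function equal to 1 at every recorded point (and
    at a and b) that dips below 0 inside the widest untouched gap."""
    ps = sorted(set(points) | {a, b})
    gaps = [(ps[i + 1] - ps[i], ps[i], ps[i + 1]) for i in range(len(ps) - 1)]
    _, lo, hi = max(gaps)                 # widest gap between samples
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    def malicious(x):
        if lo < x < hi:
            t = (x - c) / r               # t in (-1, 1) inside the gap
            # inverted bump: tends to 1 at the gap's ends, equals -1 at its centre
            return 1.0 - 2.0 * math.exp(1.0 - 1.0 / (1.0 - t * t))
        return 1.0
    return malicious

# A naive checker samples a grid and sees only the value 1:
for x in [i / 10.0 for i in range(11)]:
    spy(x)                                # checker concludes "no root"
f_bad = make_malicious(queried, 0.0, 1.0)
# f_bad agrees with every sample, yet is negative somewhere in a gap:
print(min(f_bad(i / 10000.0) for i in range(10001)))
```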

Now let's say you use a random set of nodes instead. Take a function far enough along a delta sequence (a sequence of functions approximating the Dirac delta), adjust it so it meets whatever regularity criterion you want, and place its narrow peak at a random point. There is then only a very small probability that your root-existence routine samples both a positive and a negative value of the function, meaning of course that with high probability it can't prove anything.
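A quick Monte Carlo sketch of that probability (the routine `detects_bump` and its parameters are hypothetical): with $n$ uniform random samples and a hidden dip of width $w$, the chance of landing inside it is $1 - (1-w)^n \approx nw$ for small $w$.

```python
import random

def detects_bump(n, width, trials=20_000, seed=0):
    """Monte Carlo estimate of the chance that n uniform random samples
    on [0, 1] land inside a hidden dip of the given width whose centre
    is itself placed uniformly at random."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        c = rng.uniform(width / 2.0, 1.0 - width / 2.0)   # random dip centre
        if any(abs(rng.random() - c) < width / 2.0 for _ in range(n)):
            hits += 1
    return hits / trials

# The exact probability is 1 - (1 - width)^n, roughly n * width for a
# narrow dip -- so a very thin spike is almost never detected:
print(detects_bump(n=100, width=1e-6))
print(detects_bump(n=10, width=0.5))
```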