Numerical boundary-finding algorithms


I'm very familiar with numerical root-finding algorithms like Brent's method (scipy.optimize.brentq) or Chandrupatla's method. What if I am looking for a boundary numerically?

Suppose for example that I have an unknown monotonically increasing function $y = g(x)$ between $x=a$ and $x=b$ as represented by the boundary of some region $f(x,y) \ge 0$.

I cannot evaluate $g(x)$ directly; instead, I can evaluate $f(x,y)$ and compare it to zero. For any fixed value of $x=x_0$ I can use a root-finding method to determine $y$ such that $f(x_0,y)=0$.

Aside from sweeping across the interval $x\in[a,b]$ and repeating a root-finding method for each value of $x$, is there a more efficient way? (Suppose I know that $g(x)$ is continuous, for example, and that its derivative $dg/dx$ is "mostly" continuous; then the solution to $y=g(x_0+\Delta x)$ is close to $y=g(x_0)$ .)
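To make the baseline concrete: even the naive sweep can be made cheaper by warm-starting each bracket from the previous root. A minimal sketch, using an assumed example boundary $f(x,y) = y + y^3 - x$ (so $g$ is monotone increasing with no closed form) and `scipy.optimize.brentq`:

```python
import numpy as np
from scipy.optimize import brentq

# Assumed example boundary: f(x, y) = y + y**3 - x, so g(x) solves
# g + g**3 = x (strictly increasing, no closed form, g(0) = 0).
def f(x, y):
    return y + y**3 - x

def trace_boundary(a, b, n, y_init_lo, y_init_hi):
    """Sweep x over [a, b]; reuse the previous root to build a tight bracket."""
    xs = np.linspace(a, b, n)
    ys = []
    lo, hi = y_init_lo, y_init_hi
    for x in xs:
        # Widen the bracket until it straddles the root (f increases in y).
        while f(x, lo) > 0:
            lo -= (hi - lo)
        while f(x, hi) < 0:
            hi += (hi - lo)
        y = brentq(lambda yy: f(x, yy), lo, hi)
        ys.append(y)
        # Warm start: the next root is nearby, since g is continuous.
        lo, hi = y - 0.1, y + 0.1
    return xs, np.array(ys)

xs, ys = trace_boundary(0.0, 4.0, 41, -1.0, 2.0)
```

The answers below refine exactly this warm-start step, replacing the ad hoc bracket width with bounds justified by assumptions on $g$.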


Answer 1 (score 0):

If you know that $g$ is Lipschitz continuous with Lipschitz constant $K$, then

$$g(x)-K\Delta\le g(x+\Delta)\le g(x)+K\Delta\tag1$$

will provide reasonably tight bounds if $\Delta$ is sufficiently small.
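A sketch of this idea, using an assumed example boundary $f(x,y) = y + y^3 - x$, for which $g'(x) = 1/(1 + 3g(x)^2) \le 1$, so $K = 1$ is a valid Lipschitz constant; each step uses $(1)$ as a guaranteed bracket for a safe root-finder:

```python
from scipy.optimize import brentq

def f(x, y):
    return y + y**3 - x  # assumed example boundary; g(0) = 0

def step_lipschitz(x, y, dx, K):
    """Advance from a known root y = g(x) to g(x + dx), bracketing with
    g(x) - K*dx <= g(x + dx) <= g(x) + K*dx as in (1)."""
    return brentq(lambda yy: f(x + dx, yy), y - K * dx, y + K * dx)

x, y, dx, K = 0.0, 0.0, 0.1, 1.0
for _ in range(10):
    y = step_lipschitz(x, y, dx, K)
    x += dx
# y is now an approximation of g(1), the root of y + y**3 = 1
```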

If you know that $g$ is differentiable, then the first order Taylor

$$g(x+\Delta)=g(x)+g'(x)\Delta+o(\Delta)\tag2$$

is an even better estimate. This estimate may be substituted into the above bounds to give faster convergence, or extended further if more derivatives are known.

If $K$ is unknown, then it may suffice to use an initial estimate $\tilde K>0$. Whenever $(1)$ fails to hold, double $\tilde K$ and try again. Another simple alternative, if $\Delta$ is always very small, is to use $\tilde K=k|g'(x)|$ for some $k>1$.
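The doubling strategy can be sketched as follows, again on the assumed example $f(x,y) = y + y^3 - x$ (where the true Lipschitz constant is $1$), starting from a deliberately too-small estimate:

```python
from scipy.optimize import brentq

def f(x, y):
    return y + y**3 - x  # assumed example boundary; g(0) = 0

def step_adaptive(x, y, dx, K_est):
    """Bracket with the current estimate K_est; whenever the bracket
    fails to straddle the root, double K_est and retry."""
    while True:
        lo, hi = y - K_est * dx, y + K_est * dx
        if f(x + dx, lo) <= 0.0 <= f(x + dx, hi):  # f increases in y
            return brentq(lambda yy: f(x + dx, yy), lo, hi), K_est
        K_est *= 2.0

x, y, dx, K_est = 0.0, 0.0, 0.1, 0.25   # deliberate underestimate of K
for _ in range(10):
    y, K_est = step_adaptive(x, y, dx, K_est)
    x += dx
```

After the first step, $\tilde K$ has been doubled up to $1$, which then suffices for all later steps.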

If $g'(x)$ is unknown, then it may suffice to instead use a finite difference estimate in its place from the last two points. Higher order estimates can of course be derived by using more points too.

If $g$ is twice continuously differentiable, then we actually know the error of $(2)$ to be bounded by $L\Delta^2$ for some $L>0$, even if you replace $g'(x)$ by an estimate with $\mathcal O(\Delta)$ error. By the same reasoning as with $K$ and $(1)$, you could extend $(2)$ to give the bounds

$$g(x)+g'(x)\Delta-L\Delta^2\le g(x+\Delta)\le g(x)+g'(x)\Delta+L\Delta^2\tag3$$

and so on for higher order derivatives. Such an $L$ may be estimated in the same ways as $K$.
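A sketch of the second-order bracket $(3)$, on the assumed example $f(x,y) = y + y^3 - x$, where implicit differentiation gives $g'(x) = 1/(1 + 3g(x)^2)$ and $L = 1/2$ turns out to bound the Taylor remainder (doubling covers the case where the initial $L$ is too small):

```python
from scipy.optimize import brentq

def f(x, y):
    return y + y**3 - x  # assumed example; g'(x) = 1/(1 + 3*g(x)**2)

def step_second_order(x, y, dx, L):
    """Bracket g(x + dx) by the tangent prediction +/- L*dx**2 as in (3),
    doubling L whenever the bracket misses the root."""
    gp = 1.0 / (1.0 + 3.0 * y**2)    # g'(x) at the known point
    center = y + gp * dx             # first-order Taylor prediction (2)
    while True:
        lo, hi = center - L * dx**2, center + L * dx**2
        if f(x + dx, lo) <= 0.0 <= f(x + dx, hi):  # f increases in y
            return brentq(lambda yy: f(x + dx, yy), lo, hi), L
        L *= 2.0

x, y, dx, L = 0.0, 0.0, 0.1, 0.5
for _ in range(10):
    y, L = step_second_order(x, y, dx, L)
    x += dx
```

The bracket here has width $2L\Delta^2$ rather than $2K\Delta$, so the root-finder starts from a much tighter interval for the same step size.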

Answer 2 (score 1):

Your assumptions are very weak. If stronger assumptions are made, then I can see some options.

If $f = f(x,y)$ is differentiable, $\frac{\partial f}{\partial y}(x,g(x)) \not= 0$, and $g$ is differentiable, then $g$ satisfies the ordinary differential equation $$ g'(x) = - \frac{\frac{\partial f}{\partial x}(x,g(x))}{\frac{\partial f}{\partial y}(x,g(x))}.$$ Why? By the definition of $g$ we have $$\phi(x) = f(x,g(x)) = 0.$$ By the chain rule, $$0 = \phi'(x) = \frac{\partial f}{\partial x}(x,g(x)) + \frac{\partial f}{\partial y}(x,g(x)) g'(x).$$

In principle, you can then recover $g$ using your favorite numerical method for solving ODEs. However, without some form of stabilization the numerical solution will drift away from the true one. Instead, you can use the known value of $g(x)$ and the ODE to generate two different approximations of $g(x+h)$. For example, you could use one step of the explicit Euler method with stepsize $h$ to produce $y_0 \approx g(x+h)$, and two steps of the explicit Euler method with stepsize $\frac{1}{2}h$ to produce $y_1 \approx g(x+h)$. It is unlikely that $y_0 = y_1$. You can then use the secant method to solve $$f(x+h,y) = 0,$$ using $y_0$ and $y_1$ to initialize the search for the solution $y=g(x+h)$.
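The predictor-corrector scheme above can be sketched as follows, on an assumed example $f(x,y) = y + y^3 - x$, for which $\partial f/\partial x = -1$ and $\partial f/\partial y = 1 + 3y^2$:

```python
from scipy.optimize import root_scalar

# Assumed example boundary: f(x, y) = y + y**3 - x, with g(0) = 0.
def f(x, y):
    return y + y**3 - x

def gprime(x, y):
    """g'(x) = -f_x / f_y from implicit differentiation."""
    return -(-1.0) / (1.0 + 3.0 * y**2)

def step_euler_secant(x, y, h):
    """One Euler step of size h and two steps of size h/2 give two
    distinct predictions y0, y1 of g(x + h); both seed the secant method."""
    y0 = y + h * gprime(x, y)                       # one full step
    ymid = y + 0.5 * h * gprime(x, y)               # two half steps
    y1 = ymid + 0.5 * h * gprime(x + 0.5 * h, ymid)
    sol = root_scalar(lambda yy: f(x + h, yy), x0=y0, x1=y1, method="secant")
    return sol.root

x, y, h = 0.0, 0.0, 0.1
for _ in range(10):
    y = step_euler_secant(x, y, h)
    x += h
```

Because the two Euler predictions are already $\mathcal O(h)$-accurate, the secant iteration typically needs only a few function evaluations per step.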

In the above analysis I do not see any good ways to exploit the assumption that $g$ is increasing. It is mildly helpful to know that $g(x) \leq y = g(x+h)$, so that $g(x)$ can be used to bound $y$ from below, but I also need an upper bound for $y$ before I can use a safe method such as bisection to solve for $y$.

If we are only willing to assume that $g$ is differentiable, then I would pass from $g(x)$ and $g(x-h)$ to an approximation of $g(x+h)$ using linear extrapolation, i.e. $$ g(x+h) \approx 2 g(x) - g(x-h).$$ I would then use the secant method to solve $f(x+h,y)=0$ with $y_0 = g(x)$ and $y_1 = 2g(x) - g(x-h)$. In this setting it is (mildly) useful to know that $g$ is strictly increasing: we can then be certain that $y_0 \not= y_1$, so at least one secant step is possible.
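This derivative-free variant can be sketched as follows, again on the assumed example $f(x,y) = y + y^3 - x$; the first two roots are bootstrapped once with a bracketing solver:

```python
from scipy.optimize import root_scalar

def f(x, y):
    return y + y**3 - x  # assumed example boundary; g(0) = 0

def step_extrapolate(x, y_prev, y_curr, h):
    """Predict g(x + h) ~ 2*g(x) - g(x - h) by linear extrapolation,
    then polish with the secant method seeded by y_curr and the
    prediction (distinct, since g is strictly increasing)."""
    y_pred = 2.0 * y_curr - y_prev
    sol = root_scalar(lambda yy: f(x + h, yy),
                      x0=y_curr, x1=y_pred, method="secant")
    return sol.root

h = 0.1
y_prev = 0.0                                  # g(0), known for this example
y_curr = root_scalar(lambda yy: f(h, yy),     # bootstrap g(h) once
                     bracket=[0.0, 1.0], method="brentq").root
x = h
for _ in range(9):
    y_prev, y_curr = y_curr, step_extrapolate(x, y_prev, y_curr, h)
    x += h
# y_curr is now an approximation of g(1)
```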


I would be interested to know to what extent you can strengthen your assumptions on $f$ and $g$.