I'm very familiar with numerical root-finding algorithms like Brent's method (scipy.optimize.brentq) or Chandrupatla's method. What if I am looking for a boundary numerically?
Suppose for example that I have an unknown monotonically increasing function $y = g(x)$ between $x=a$ and $x=b$ as represented by the boundary of some region $f(x,y) \ge 0$.
I cannot evaluate $g(x)$ directly; instead, I can evaluate $f(x,y)$ and compare it to zero. For any fixed value of $x=x_0$ I can use a root-finding method to determine $y$ such that $f(x_0,y)=0$.
Aside from sweeping across the interval $x\in[a,b]$ and running a root-finding method from scratch for each value of $x$, is there a more efficient way? (Suppose I know that $g(x)$ is continuous, for example, and that its derivative $dg/dx$ is "mostly" continuous; then $g(x_0+\Delta x)$ is close to $g(x_0)$.)
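For reference, here is the naive sweep I have in mind, using `scipy.optimize.brentq` with a wide fixed bracket at every $x$. The region $f(x,y)=x^2-y\ge 0$ (so $g(x)=x^2$) and the bracket $[0,10]$ are made-up stand-ins for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical region: f(x, y) >= 0 below the curve y = g(x) = x**2,
# so f(x, y) = x**2 - y and the boundary satisfies f(x, g(x)) = 0.
def f(x, y):
    return x**2 - y

a, b = 1.0, 2.0
xs = np.linspace(a, b, 11)

# Naive sweep: solve f(x0, y) = 0 from scratch at every x0 with a
# wide fixed bracket [y_lo, y_hi] known to contain the boundary.
y_lo, y_hi = 0.0, 10.0
boundary = [brentq(lambda y: f(x0, y), y_lo, y_hi) for x0 in xs]
```

Every solve here starts from the same wide bracket, which is the redundancy I would like to avoid.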
If you know that $g$ is Lipschitz continuous with Lipschitz constant $K$, then
$$g(x)-K\Delta\le g(x+\Delta)\le g(x)+K\Delta\tag1$$
will provide reasonably tight bounds if $\Delta$ is sufficiently small.
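As a sketch of how $(1)$ turns into an algorithm: solve once from a wide bracket, then at each step use $[g(x)-K\Delta,\,g(x)+K\Delta]$ as the bracket for the next root. The region $f(x,y)=x^2-y$ (so $g(x)=x^2$ with $K=4$ on $[1,2]$) is again a made-up example:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical region as before: boundary g(x) = x**2, f(x, y) = x**2 - y.
def f(x, y):
    return x**2 - y

a, b = 1.0, 2.0
K = 4.0   # Lipschitz constant of g on [a, b]: |g'(x)| = 2x <= 4
dx = 0.1
xs = np.arange(a, b + dx/2, dx)

# Solve once from a wide bracket, then use (1) to bracket each next root.
y = brentq(lambda yy: f(xs[0], yy), 0.0, 10.0)
boundary = [y]
for x in xs[1:]:
    y = brentq(lambda yy: f(x, yy), y - K*dx, y + K*dx)
    boundary.append(y)
```

Each solve after the first now works on a bracket of width $2K\Delta$ rather than the full search range, so the per-step cost shrinks with $\Delta$.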
If you know that $g$ is differentiable, then the first order Taylor
$$g(x+\Delta)=g(x)+g'(x)\Delta+o(\Delta)\tag2$$
is an even better estimate. This estimate may be combined with the bounds above to give even faster convergence, or extended further if more derivatives are known.
If $K$ is unknown, then it may suffice to start from an initial estimate $\tilde K>0$: whenever $(1)$ fails to hold, double $\tilde K$ and try again. Another simple strategy, if $\Delta$ is always very small, is to take $\tilde K=k|g'(x)|$ for some $k>1$.
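The doubling strategy can be sketched as follows, again on the made-up region $f(x,y)=x^2-y$, starting from a deliberately bad estimate $\tilde K$. Detecting that $(1)$ fails amounts to checking that the candidate bracket has no sign change:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical region again: f(x, y) = x**2 - y, boundary g(x) = x**2.
def f(x, y):
    return x**2 - y

dx = 0.1
xs = np.arange(1.0, 2.0 + dx/2, dx)
K_tilde = 0.1   # deliberately bad initial guess for the Lipschitz constant

y = brentq(lambda yy: f(xs[0], yy), 0.0, 10.0)
boundary = [y]
for x in xs[1:]:
    # Whenever the bracket implied by (1) shows no sign change, i.e. (1)
    # failed to hold for this K_tilde, double K_tilde and try again.
    while f(x, y - K_tilde*dx) * f(x, y + K_tilde*dx) >= 0:
        K_tilde *= 2
    y = brentq(lambda yy: f(x, yy), y - K_tilde*dx, y + K_tilde*dx)
    boundary.append(y)
```

Since $\tilde K$ only ever doubles, the cost of the bad initial guess is a handful of extra evaluations of $f$ early in the sweep.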
If $g'(x)$ is unknown, then a finite difference estimate from the last two points may suffice in its place. Higher order estimates can of course be derived from more points too.
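A sketch of the finite-difference variant: the secant slope through the last two accepted points centers the next bracket, as in $(2)$. The fixed half-width `margin` here is an ad hoc fudge factor rather than a principled bound, and the region is the same made-up $f(x,y)=x^2-y$:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical region once more: f(x, y) = x**2 - y, boundary g(x) = x**2.
def f(x, y):
    return x**2 - y

dx = 0.05
xs = np.arange(1.0, 2.0 + dx/2, dx)
margin = 0.05   # ad hoc half-width of the bracket around the prediction

# Bootstrap the first two points with a wide bracket.
ys = [brentq(lambda yy: f(x, yy), 0.0, 10.0) for x in xs[:2]]
for x in xs[2:]:
    slope = (ys[-1] - ys[-2]) / dx   # finite-difference estimate of g'
    y_pred = ys[-1] + slope * dx     # first-order prediction as in (2)
    ys.append(brentq(lambda yy: f(x, yy), y_pred - margin, y_pred + margin))
```

In practice you would pair this predictor with the doubling safeguard above, widening the margin whenever the predicted bracket misses the root.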
If $g$ is twice continuously differentiable, then the error of $(2)$ is in fact bounded by $L\Delta^2$ for some $L>0$, even if you replace $g'(x)$ by an estimate with $\mathcal O(\Delta)$ error. Proceeding as with $K$ and $(1)$, you could extend $(2)$ to give the bounds
$$g(x)+g'(x)\Delta-L\Delta^2\le g(x+\Delta)\le g(x)+g'(x)\Delta+L\Delta^2\tag3$$
and so on for higher order derivatives. Such an $L$ may be estimated in the same ways as $K$.
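To close the loop, here is $(3)$ as a bracketing step when $g'$ is available, on the same made-up region $f(x,y)=x^2-y$. There $g''\equiv 2$, so any $L>1$ gives a valid bracket; it is padded slightly so the endpoints straddle the root strictly:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical region: f(x, y) = x**2 - y, so g(x) = x**2, g'(x) = 2x,
# and |g''|/2 = 1, so any L > 1 makes (3) a strict bracket.
def f(x, y):
    return x**2 - y

def gprime(x):
    return 2.0 * x

dx = 0.1
xs = np.arange(1.0, 2.0 + dx/2, dx)
L = 1.5   # padded above |g''|/2 = 1 so the bracket endpoints straddle the root

y = brentq(lambda yy: f(xs[0], yy), 0.0, 10.0)
boundary = [y]
for x_prev, x in zip(xs[:-1], xs[1:]):
    lo = y + gprime(x_prev)*dx - L*dx**2   # lower bound from (3)
    hi = y + gprime(x_prev)*dx + L*dx**2   # upper bound from (3)
    y = brentq(lambda yy: f(x, yy), lo, hi)
    boundary.append(y)
```

The bracket width is now $2L\Delta^2$ rather than $2K\Delta$, so halving the step size quarters the bracket each root-finder has to search.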