Evaluating implicit functions numerically by transforming the problem into an ODE


It just occurred to me that if we need to evaluate an implicit function given by a nonlinear equation for a range of values, common root-finding methods, like Newton-Raphson, might not be the most economical option. Let's say we have an equation:

$$F(x,y)=C$$

And we want to numerically evaluate $y(x)$ for $x \in [a,b]$.

Then one way to do that is to turn the problem into an ODE. Taking the total differential of both sides gives:

$$F_x dx+F_y dy=0$$

$$\frac{dy}{dx}=-\frac{F_x}{F_y}$$

This equation could then be solved numerically by a suitable finite-difference scheme (provided that $F_y \neq 0$ along the solution curve for $x \in [a,b]$).

The initial condition can be obtained by a single application of a root-finding algorithm for the equation:

$$F(a,y)=C$$

If we need more starting values (for multistep schemes), they can be obtained by root-finding as well.
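To make the procedure concrete, here is a minimal Python sketch of the whole pipeline (the function name `implicit_curve`, the tolerances, and the classical fixed-step RK4 stepper are my own illustrative choices): a single Newton-Raphson solve supplies $y(a)$, and the ODE is then marched across $[a,b]$.

```python
import numpy as np

def implicit_curve(F, Fx, Fy, C, a, b, y_guess, n_steps=1000):
    """Tabulate y(x) defined implicitly by F(x, y) = C on [a, b].

    A single Newton-Raphson solve of F(a, y) = C gives the initial
    condition; the rest of the curve comes from integrating
    dy/dx = -Fx/Fy with a classical 4th-order Runge-Kutta scheme.
    """
    # Newton-Raphson for the initial condition y(a).
    y = y_guess
    for _ in range(50):
        step = (F(a, y) - C) / Fy(a, y)
        y -= step
        if abs(step) < 1e-14:
            break

    # Right-hand side of the ODE; valid as long as Fy != 0.
    def rhs(x, y):
        return -Fx(x, y) / Fy(x, y)

    xs = np.linspace(a, b, n_steps + 1)
    h = (b - a) / n_steps
    ys = np.empty(n_steps + 1)
    ys[0] = y
    for i in range(n_steps):
        x, yi = xs[i], ys[i]
        k1 = rhs(x, yi)
        k2 = rhs(x + h / 2, yi + h / 2 * k1)
        k3 = rhs(x + h / 2, yi + h / 2 * k2)
        k4 = rhs(x + h, yi + h * k3)
        ys[i + 1] = yi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return xs, ys
```

For the cubic example discussed below, this would be called as `implicit_curve(lambda x, y: y**3 - x*y - 1, lambda x, y: -y, lambda x, y: 3*y**2 - x, C=0.0, a=0.0, b=4.0, y_guess=1.0)`.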

Can this method offer a numerical advantage (run-time, complexity, etc.) compared to Newton-Raphson or other algorithms? Does it depend on the nature of $F(x,y)$? Or is this method always worse than the usual root-finding methods?

The application I see is computing $y(x)$ for a range of values of $x$ without the need for very high precision (for example, in scientific applications, when we just need to plot the function).


Here's an example:

$$y^3-x y-1=0$$

The ODE is:

$$\frac{dy}{dx}=\frac{y}{3y^2-x}$$

The initial condition is:

$$y(0)=y_0=\sqrt[3]{1}$$

One useful feature: by choosing $y_0$ to be one of the three cube roots of unity, we get all three solution branches of the original equation. For example:

$$y_0=1$$

gives us precisely the real solution, while $y_0=e^{2 \pi i/3}$ and $y_0=e^{4 \pi i/3}$ give the two complex conjugate solutions, as confirmed by computations in Mathematica.
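Here is a quick numerical check of the complex branches, as a sketch only (it assumes SciPy's `solve_ivp`, which can integrate a complex-valued state when given a complex initial value, and it stops at $x=1.5$, before $x \approx 1.89$ where the two complex roots merge into a real double root and $3y^2-x$ vanishes there):

```python
import numpy as np
from scipy.integrate import solve_ivp

# dy/dx = y / (3 y^2 - x), followed from the complex starting value
# y(0) = exp(2*pi*i/3) up to x = 1.5 (safely below x ~ 1.89, where the
# two complex roots merge and 3 y^2 - x vanishes on these branches).
rhs = lambda x, y: y / (3 * y**2 - x)
sol = solve_ivp(rhs, (0.0, 1.5), [np.exp(2j * np.pi / 3)],
                rtol=1e-10, atol=1e-12)

y_ode = sol.y[0, -1]                              # ODE value at x = 1.5
roots = np.roots([1.0, 0.0, -1.5, -1.0])          # roots of y^3 - 1.5 y - 1 = 0
y_ref = roots[np.argmin(np.abs(roots - y_ode))]   # exact root nearest the ODE value
print(y_ode, y_ref)  # the two should agree to roughly the solver tolerance
```

The conjugate branch starting from $y_0=e^{4\pi i/3}$ behaves the same way, mirrored across the real axis.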

Here's the plot of $y(x)$ with the initial condition $y_0=1$ (blue) obtained from numerically solving the ODE compared to the plot of the exact solution of the cubic equation in radicals (orange):

*(plot: numerical ODE solution in blue, exact solution in radicals in orange)*
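For reference, a sketch that reproduces this kind of comparison (the grid, the range $[0,4]$, and the library choices are mine; `numpy.roots` stands in for the closed-form radical solution):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# ODE form of y^3 - x*y - 1 = 0, real branch starting at y(0) = 1.
rhs = lambda x, y: y / (3 * y**2 - x)
xs = np.linspace(0.0, 4.0, 201)
sol = solve_ivp(rhs, (xs[0], xs[-1]), [1.0], t_eval=xs, rtol=1e-8, atol=1e-10)
y_ode = sol.y[0]

# Reference values: at each x, the real root of the cubic nearest the
# ODE solution (numpy.roots stands in for the radical formula here).
y_ref = np.empty_like(xs)
for i, x in enumerate(xs):
    r = np.roots([1.0, 0.0, -x, -1.0])
    real = r[np.abs(r.imag) < 1e-9].real
    y_ref[i] = real[np.argmin(np.abs(real - y_ode[i]))]

print("max |ODE - reference| =", np.max(np.abs(y_ode - y_ref)))

plt.plot(xs, y_ode, label="ODE solution, $y_0 = 1$")
plt.plot(xs, y_ref, "--", label="real root of the cubic")
plt.xlabel("x"); plt.ylabel("y"); plt.legend(); plt.show()
```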