Disambiguating the idea of a "fixed point" in nonlinear ODEs versus numerical analysis


I have found that the term "fixed point" is used in a couple of different mathematical contexts, and the definitions seem entirely different. I was wondering whether these two usages are equivalent at some deeper level, or whether it is just a matter of colloquial usage?

In nonlinear ordinary differential equations, à la Strogatz's book Nonlinear Dynamics and Chaos, a fixed point is a point where the derivative is zero. Given a system

$$ \frac{dx}{dt} = f(x) $$

where $f(x)$ is a nonlinear function, a fixed point is a point where

$$ f(x) = 0 $$

meaning that $x'(t) = 0$, so the state does not move. This definition makes sense to me because the evolution of the system is governed by the stability properties of its fixed points.
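To make the first definition concrete, here is a minimal sketch (my own illustration, not from Strogatz) that finds the fixed points of the logistic equation $dx/dt = x(1-x)$ by applying Newton's method to $f(x) = 0$:

```python
# Fixed points of the logistic ODE dx/dt = f(x) = x*(1 - x),
# found by solving f(x) = 0 with Newton's method.

def f(x):
    return x * (1.0 - x)

def df(x):
    # Derivative of f, needed for Newton's method.
    return 1.0 - 2.0 * x

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method for the root problem f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# The logistic equation has fixed points at x = 0 and x = 1;
# Newton's method recovers whichever one the initial guess is near.
print(newton(f, df, x0=-0.2))  # ~0.0
print(newton(f, df, x0=1.3))   # ~1.0
```

Which fixed point the iteration converges to depends on the initial guess, which is exactly why stability and basins of attraction matter in the ODE picture.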

The second definition of a fixed point comes from numerical analysis, in the context of "fixed-point iteration." There, a fixed point of a function $g$ is defined by:

$$ g(x) = x $$

This definition seems very different from the one used in Strogatz. I am not clear on how the numerical-methods definition of a fixed point is used, and it is not obvious to me how it would help in solving nonlinear ODEs.
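For concreteness, fixed-point iteration in the numerical-analysis sense just repeats $x \leftarrow g(x)$ until the value stops changing. A standard textbook example (my own illustration, not tied to any particular library) is $g(x) = \cos(x)$:

```python
import math

# Fixed-point iteration: repeat x <- g(x) until convergence.
# For g(x) = cos(x), the iteration converges to the unique
# solution of cos(x) = x, about 0.739085 (the Dottie number).

def fixed_point_iteration(g, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

x_star = fixed_point_iteration(math.cos, x0=1.0)
print(x_star)  # ~0.7390851
```

The iteration converges here because $|g'(x)| = |\sin(x)| < 1$ near the fixed point, i.e. $g$ is a contraction there.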

I was just wondering if there is any connection between these two different definitions of fixed points that I am missing. Or is it just a language thing?

For my own purposes, I have to find the fixed points of a large system of nonlinear ODEs, and hence I have to use numerical root-finding methods to solve $f(x) = 0$. However, whenever I look this topic up in papers or software libraries, I see references to $0 = g(x) - x$, which is of course the second definition of a fixed point.
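To illustrate the notational overlap I keep running into: a root problem $f(x) = 0$ can be rewritten as a fixed-point problem $g(x) = x$ by taking, for example, $g(x) = x + a\,f(x)$ for some nonzero constant $a$. The following sketch is my own illustration (the choice of $g$ and the damping factor are assumptions, not from any specific library):

```python
# Rewriting a root problem f(x) = 0 as a fixed-point problem g(x) = x,
# with g(x) = x + a*f(x), and solving it by fixed-point iteration.

def f(x):
    return x * x - 2.0   # roots at +/- sqrt(2)

def g(x):
    a = -0.25            # damping factor chosen so |g'(x)| < 1 near sqrt(2)
    return x + a * f(x)

x = 1.0
for _ in range(200):
    x = g(x)             # iterate x <- g(x)

print(x)  # ~1.41421356, i.e. sqrt(2)
```

Any $x$ with $g(x) = x$ satisfies $f(x) = 0$ and vice versa, so the two formulations describe the same solution set; whether the plain iteration converges depends on the choice of $g$.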

Thanks for any explanations.