Brute-force bifurcation diagram


I know this question on the problem has an unaccepted answer, but let's really settle it. I believe it can serve as a great starting reference.

Suppose that I have an autonomous system given by an ODE/DAE $\mathcal{D}(\dot{y}, y, \mu) = 0$ with some parameter $\mu$. Using an implicit integration scheme, I can obtain a bunch of solutions for different (consistent) initial conditions $y_0(t_0)$ at some parameter value $\mu$.
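For concreteness, here is the kind of integration I have in mind, as a sketch using SciPy's implicit Radau scheme on a stiff Van der Pol oscillator as a stand-in for the abstract $\mathcal{D}$ (the system, parameter value, and initial conditions are all illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in for the abstract system D(y', y, mu) = 0: a stiff Van der Pol
# oscillator with parameter mu, written in explicit first-order form.
def vdp(t, y, mu):
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

mu = 5.0
initial_conditions = [[1.0, 0.0], [0.5, 0.5], [-1.0, 1.0]]

solutions = []
for y0 in initial_conditions:
    # 'Radau' is an implicit Runge-Kutta scheme, suitable for stiff problems.
    sol = solve_ivp(vdp, (0.0, 50.0), y0, args=(mu,),
                    method="Radau", rtol=1e-8, atol=1e-10)
    solutions.append(sol)
```

Each element of `solutions` then holds one trajectory for one initial condition at the fixed $\mu$.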

Now, I know there are steady (non-oscillatory) solutions, limit cycles (periodic solutions with period $T$, possibly with harmonics), damped oscillatory solutions, quasiperiodic solutions, and probably some chaotic behaviour. I am interested in creating a bifurcation diagram.

In terms of pseudocode, how can I achieve this?

In particular, I am confused about picking points of the time series. How do I choose them (the sampling period $T$)? What is the rule in such cases?

For illustration, the time series below are for some fixed parameter value $\mu_1$ and three different initial conditions. They clearly exhibit chaotic behaviour:

[figure: three chaotic time series at $\mu_1$]

For the same set of initial conditions but a different $\mu_2 \neq \mu_1$, I obtain these time series:

[figure: three time series at $\mu_2$]

We can see that, depending on the initial condition, only a phase difference occurs; all trajectories tend to the same behaviour.

Can you provide a commented procedure for this? Papers give no explicit description of how their diagrams were derived.

There are two answers below.

Accepted answer:

> In particular, I am confused about picking points of the time series. How do I choose them (the sampling period $T$)? What is the rule in such cases?

To make a bifurcation diagram for a continuous-time system, you first need to construct a Poincaré map. Specifically, you obtain a sequence of values of a reasonable observable at somewhat systematically chosen points of your oscillation. The most common choice is the sequence of local minima or maxima of one dynamical variable, but you can also go for observables like the slope at zero crossings or similar. Usually, you choose some aspect of the dynamics that is relevant to your research.

Note that you cannot just sample at a fixed time interval, since there is no fixed period length across different parameter values, nor even within one parameter value in the case of chaotic dynamics.
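A minimal sketch of extracting such Poincaré values (here: local maxima of one variable) from an evenly sampled trajectory; the signal below is synthetic and purely illustrative:

```python
import numpy as np

def local_maxima(x):
    """Return the values of strict interior local maxima of a 1-D series."""
    x = np.asarray(x)
    # A point is a local maximum if it exceeds both of its neighbours.
    mask = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
    return x[1:-1][mask]

# Example: a two-frequency signal sampled finely enough that every
# true peak shows up as a discrete local maximum.
t = np.linspace(0.0, 20.0, 4000)
x = np.sin(t) + 0.3 * np.sin(3.1 * t)
peaks = local_maxima(x)
```

The sampling grid must be fine relative to the fastest oscillation, otherwise peaks fall between samples and the map values are distorted.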

Except for using the Poincaré map instead of the actual map, the method is the same as for discrete-time systems:

  1. Choose a reasonably fine sampling of the parameter interval you are interested in, say $μ_i = μ_\text{min} + \frac{i}{N} (μ_\text{max}-μ_\text{min})$ with $i \in \{1, \dots, N\}$.
  2. For each $μ_i$:
    1. Evolve the system (using that $μ_i$) long enough to get rid of transients.
    2. Further evolve the system for a sufficient amount of time and extract the values of the Poincaré map (let them be $a_1, a_2, …$).
    3. Make dots at $(μ_i,a_1), (μ_i,a_2), …$ in your bifurcation diagram.
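The steps above can be sketched as follows, using the Rössler system and its parameter $c$ as stand-ins for $\mathcal{D}$ and $\mu$ (the grid, time spans, and tolerances are illustrative and would be chosen much finer in practice):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rössler system; c plays the role of the bifurcation parameter mu.
def rossler(t, y, a, b, c):
    x, v, z = y
    return [-v - z, x + a * v, b + z * (x - c)]

def poincare_values(c, a=0.2, b=0.2, t_transient=200.0, t_sample=150.0):
    y0 = [1.0, 1.0, 0.0]
    # Step 2.1: evolve past the transient; keep only the final state.
    tr = solve_ivp(rossler, (0.0, t_transient), y0, args=(a, b, c),
                   rtol=1e-8, atol=1e-10)
    # Step 2.2: evolve further and record x(t) on a fine grid.
    t_eval = np.linspace(0.0, t_sample, 15000)
    sol = solve_ivp(rossler, (0.0, t_sample), tr.y[:, -1], args=(a, b, c),
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    x = sol.y[0]
    # Poincaré map values a_1, a_2, ...: strict interior local maxima of x.
    mask = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
    return x[1:-1][mask]

# Step 1: parameter grid (very coarse here, for demonstration only).
mus = np.linspace(2.0, 6.0, 5)
# Step 2.3: the dots (mu_i, a_k) of the bifurcation diagram.
diagram = [(c, poincare_values(c)) for c in mus]
```

Plotting every pair `(c, a_k)` from `diagram` as a dot then yields the brute-force bifurcation diagram.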

You only need to consider multiple initial conditions if your system is multistable, which you should also notice as interrupted lines on a good bifurcation diagram (unless you have two or more competing chaotic or quasiperiodic attractors).

Second answer:

To answer your first question: to select the timestep, you need to satisfy the following conditions.

  1. Your timestep is small enough that you can resolve your features. You may attempt to bound how fast your function can change (e.g. via Lipschitz continuity). If you find such a bound, you can directly derive a timestep small enough to capture all local fluctuations. Note, however, that some evil functions (e.g. $\sin \frac{1}{x}$) don't have such a bound, in which case you are screwed :D
  2. Your timestep has to be such that the error of your numerical integration scheme (e.g. Runge-Kutta) on the sampling timescale is smaller than the changes you observe due to your parameter. One way to approximate the error is to use two different numerical integration procedures (e.g. order 2 and 4) and approximate the error via the difference between the results.
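A sketch of this two-order error estimate, comparing fixed-step RK2 (Heun) against classical RK4 on a toy problem $\dot{y} = -y$ (the problem and step size are purely illustrative):

```python
import numpy as np

# Toy problem y' = -y with y(0) = 1; exact solution is exp(-t).
def f(t, y):
    return -y

def integrate(step, f, y0, t0, t1, h):
    """Fixed-step integration with the given one-step method."""
    n = int(round((t1 - t0) / h))
    t, out = t0, [y0]
    for _ in range(n):
        out.append(step(f, t, out[-1], h))
        t += h
    return np.array(out)

def rk2_step(f, t, y, h):  # Heun's method, order 2
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def rk4_step(f, t, y, h):  # classical Runge-Kutta, order 4
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.01
y2 = integrate(rk2_step, f, 1.0, 0.0, 5.0, h)
y4 = integrate(rk4_step, f, 1.0, 0.0, 5.0, h)
# The difference is dominated by the RK2 error, so it serves as a
# cheap estimate of the order-2 scheme's error at this timestep.
err_estimate = np.max(np.abs(y2 - y4))
```

If `err_estimate` is larger than the parameter-induced changes you want to resolve, shrink `h` and repeat.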