Relation between classical definitions of chaos and exponential divergence of trajectories


There are many definitions of chaos, but perhaps one of the most common is by Devaney:

Let $X$ be a metric space. A continuous map $f:X\to X$ is said to be chaotic on $X$ if

  1. $f$ is transitive,
  2. the periodic points of $f$ are dense in $X$,
  3. $f$ has sensitive dependence on initial conditions.

where, in turn, we usually define sensitive dependence on initial conditions as:

A function $f:X \to X$ has sensitive dependence on initial conditions if there exists $\delta >0$ such that, for every $x \in X$ and any neighborhood $N$ of $x$, there exist $y \in N$ and $n \geq 0$ such that $d(f^n(x),f^n(y)) > \delta$.
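As a concrete illustration of this definition (not from the original post): the doubling map $f(x) = 2x \bmod 1$ on $[0,1)$ is a standard example of a map with sensitive dependence. The sketch below tracks two initially very close points until their separation exceeds a candidate $\delta$; the starting points and $\delta$ are arbitrary choices.

```python
# Doubling map f(x) = 2x mod 1 on [0, 1): a standard example of a map
# with sensitive dependence on initial conditions.

def f(x):
    return (2.0 * x) % 1.0

delta = 0.25          # candidate sensitivity constant (arbitrary choice)
xn = 0.1              # reference point x
yn = 0.1 + 1e-9       # a point in a tiny neighbourhood of x

# Iterate both points until |f^n(x) - f^n(y)| > delta.
# (If the points straddle the mod-1 wrap, abs() can jump; for the purpose
# of exceeding delta this is harmless.)
n = 0
while abs(xn - yn) <= delta:
    xn, yn = f(xn), f(yn)
    n += 1

print(n, abs(xn - yn))
```

Since the initial separation roughly doubles each step, the threshold is crossed after only a few dozen iterations, even though the points started $10^{-9}$ apart.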

Some actually define maps to be chaotic when they have a positive Lyapunov exponent. Informally, this is a measure of how fast nearby trajectories diverge from each other, and for chaotic maps the divergence is exponentially fast (positive Lyapunov exponent). Almost any sensible definition of chaos seems to imply this property, which is why it is often called an indicator of chaos.

Question: Have I misunderstood something trivial, or how do the ‘usual ingredients’ of chaos (such as the ones in Devaney’s definition) imply an exponential divergence of trajectories? It seems to me that a map could have sensitive dependence without having an exponential divergence of trajectories (?), for example. In short, what in most definitions of chaos implies exponential (as opposed to just a linear) divergence of trajectories? Is this a simple matter, or perhaps a more complicated one?

Best answer:

In all¹ dynamical systems, be they continuous (CT) or discrete (DT) in time, chaotic or regular, the distance between infinitesimally close trajectories behaves exponentially on average.

For a one-dimensional map $f$ (DT), this can be seen as follows:

  1. Consider two sufficiently close points $x_0$ and $y_0$. Denote $x_t := f(x_{t-1}) = f^t(x_0)$ and $y_t := f^t(y_0)$.

  2. Then we have:

    $$|x_t - y_t| = |f(x_{t-1}) - f(y_{t-1})| ≈ |f'(x_{t-1})| · |x_{t-1}-y_{t-1}|.$$

    This follows from a first-order Taylor expansion, assuming the map is (piecewise) smooth. (We might as well replace $f'(x_{t-1})$ with $f'(y_{t-1})$, since the points are close.) In other words, the dynamics is locally linearisable.

  3. We can iterate this backwards:

    $$|x_t - y_t| ≈ \left|f'\left(x_{t-1}\right)\right| · \left|f'\left(x_{t-2}\right)\right| · … · \left|f'\left(x_0\right)\right| · |x_0 - y_0|$$

    Now, the values of $f'$ differ from iteration to iteration; for example, they may initially conspire such that $|x_t - y_t|$ appears to grow only linearly with $t$. However, when averaging over trajectories or taking $t→∞$ (as in the definition of the largest Lyapunov exponent), these differences become irrelevant and you obtain exponential growth:

    $$|x_t - y_t| ≈ μ^t · |x_0 - y_0|,$$

    with $μ$ being the geometric mean of the values of $|f'|$ along the trajectory. (The largest Lyapunov exponent is then $λ = \ln μ$, and exponential divergence corresponds to $μ > 1$, i.e., $λ > 0$.)
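The steps above can be checked numerically. A minimal sketch (my example, not from the original answer), using the logistic map $f(x) = 4x(1-x)$, whose largest Lyapunov exponent is known to be $\ln 2$: accumulate $\sum_s \ln|f'(x_s)|$ along an orbit and take the $t$-th root of the product, which should converge to the geometric mean $μ ≈ 2$.

```python
import math

def f(x):
    return 4.0 * x * (1.0 - x)   # logistic map with r = 4

def df(x):
    return 4.0 - 8.0 * x         # its derivative f'(x)

x = 0.3          # arbitrary initial condition on the chaotic attractor
t = 10_000
log_product = 0.0                # accumulates log|f'(x_0)| + ... + log|f'(x_{t-1})|
for _ in range(t):
    log_product += math.log(abs(df(x)))
    x = f(x)

mu = math.exp(log_product / t)   # geometric mean of |f'| along the orbit
print(mu)                        # should be close to 2, i.e. λ = ln μ ≈ ln 2
```

Summing logarithms instead of multiplying the derivatives directly avoids floating-point overflow, since the raw product grows like $2^t$.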

For multidimensional maps or continuous-time systems, things become a little more complicated: $f'$ is replaced by the Jacobian of the dynamics, and you have to take the direction of the distance vector $x_t - y_t$ into account. Over time, this vector aligns itself with the direction of largest growth (or least shrinking). After that, you essentially have the same situation as before.
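This multidimensional version can also be sketched numerically (my illustration, not part of the original answer), here for the Hénon map with the classic parameters $a = 1.4$, $b = 0.3$: a tangent vector is pushed forward by the Jacobian at each step and renormalised, and the average log of its growth factor estimates the largest Lyapunov exponent, while the vector itself aligns with the direction of largest growth.

```python
import math

A, B = 1.4, 0.3   # classic Hénon parameters (chaotic attractor)

def henon(x, y):
    return 1.0 - A * x * x + y, B * x

def jacobian_apply(x, y, vx, vy):
    # Jacobian of the Hénon map at (x, y) is [[-2Ax, 1], [B, 0]];
    # apply it to the tangent vector (vx, vy).
    return -2.0 * A * x * vx + vy, B * vx

x, y = 0.1, 0.1        # initial condition in the basin of the attractor
vx, vy = 1.0, 0.0      # arbitrary initial tangent vector
log_growth = 0.0
steps = 20_000

for _ in range(steps):
    vx, vy = jacobian_apply(x, y, vx, vy)   # push the vector forward
    norm = math.hypot(vx, vy)
    log_growth += math.log(norm)            # accumulate log of growth factor
    vx, vy = vx / norm, vy / norm           # renormalise to avoid overflow
    x, y = henon(x, y)                      # advance the trajectory

lyap = log_growth / steps
print(lyap)   # positive on the chaotic Hénon attractor
```

The repeated renormalisation is the standard trick (Benettin et al.) for computing the largest Lyapunov exponent: without it, the tangent vector would overflow, and with it, the vector's direction still converges to the most expanding one.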


So, in all¹ dynamical systems, the distance between nearby trajectories exponentially shrinks, exponentially grows, or remains constant on average. The sensitive dependence on initial conditions required by the definition of chaos excludes shrinking and constancy, and thus you arrive at exponential growth.


¹ except maybe for some carefully crafted pathological cases