How are definitions of chaos related?


Chaotic systems can be defined in many ways. One definition is that the system has a positive Lyapunov exponent, that is, two trajectories starting near each other will diverge exponentially quickly. Another definition is that the system has nonzero Kolmogorov-Sinai entropy, that is, no matter how finely we partition the phase space of the system, on a long enough time scale there is always some uncertainty in the evolution of the discrete system induced by the partition. Both of these capture the notion of sensitivity to initial conditions. Are they equivalent conditions? Is one a necessary condition on the other? Does knowledge of the numerical value of one help derive the other, even approximately?

Best answer:

Let me try to summarize the situation connecting Lyapunov exponents and metric entropy. A good expository reference for this relationship is given here.

Theorem (Ruelle's Inequality): Let $f : M \to M$ be a $C^1$ map (not necessarily invertible) of a compact manifold $M$, and let $\mu$ be an $f$-invariant measure. For $\mu$-almost every $x$, the Lyapunov exponents $\lambda_i(x)$ are defined; let $\lambda_+(x)$ denote the sum of the positive Lyapunov exponents of $f$ at $x$ (counted with multiplicity). Then, $$ h_{\mu}(f) \leq \int_M \lambda_+(x) \,d\mu(x). $$ In particular, positive metric entropy implies the existence of (a positive $\mu$-measure set of) points $x \in M$ for which $\lambda_+(x) > 0$.
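As a quick numerical sanity check (not part of the answer above): for the logistic map $f(x) = 4x(1-x)$, the absolutely continuous invariant measure has $h_\mu(f) = \log 2$, and the Lyapunov exponent can be estimated as a Birkhoff average of $\log|f'|$ along a typical orbit; it comes out near $\log 2$, consistent with Ruelle's inequality (in fact with equality here, as Pesin's formula below predicts). A minimal sketch, assuming a generic initial point:

```python
import math

def lyapunov_logistic(x0=0.123456789, burn_in=1000, n=200_000):
    """Estimate the Lyapunov exponent of f(x) = 4x(1-x) as the
    Birkhoff average of log|f'(x)| = log|4 - 8x| along an orbit."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return total / n

est = lyapunov_logistic()
print(est, math.log(2))   # estimate should land close to log 2 ≈ 0.6931
```

The choice of initial point and orbit length is arbitrary; any typical point for the absolutely continuous measure gives the same limit by the Birkhoff ergodic theorem.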

On the other hand, there are systems $(f, \mu)$ with positive Lyapunov exponents for which the metric entropy is zero. This happens exactly when Ruelle's inequality is strict. As a trivial example, consider a map $f : \mathbb{R}^n \rightarrow \mathbb{R}^n$ with a saddle at the origin, and let $\mu$ be the delta mass at $0$: then $h_\mu(f) = 0$, but $\lambda_+(0) > 0$.
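To make the saddle example concrete (a sketch, using an illustrative linear map not specified in the answer): take $f = \mathrm{diag}(2, \tfrac12)$ on $\mathbb{R}^2$. The origin is a fixed saddle, the delta mass there is invariant with zero entropy, and the Lyapunov exponents are just the logs of the eigenvalue moduli:

```python
import math

# Linear saddle f(x, y) = (2x, y/2): the origin is a hyperbolic fixed point.
# At a fixed point of a linear map, the Lyapunov exponents are the logs of
# the eigenvalue moduli; any measure concentrated on a fixed point has
# zero entropy.
eigenvalues = [2.0, 0.5]
exponents = [math.log(abs(ev)) for ev in eigenvalues]
lambda_plus = sum(max(e, 0.0) for e in exponents)  # sum of positive exponents

entropy = 0.0  # delta mass at the fixed point
print(lambda_plus)             # log 2 ≈ 0.6931 > 0
print(entropy < lambda_plus)   # Ruelle's inequality, strict here
```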

A better example: suppose that $\mu$ is supported on a hyperbolic horseshoe and is isomorphic to the $(\frac12, \frac12)$ Bernoulli shift (this is the case for any horseshoe with two 'branches'). You can check that $h_{\mu}(f) = \log 2$, but $\lambda_+(x) > \log 2$ on the horseshoe: in a linear horseshoe, the expansion must exceed a factor of $2$ in order to 'bend around' and form the second branch.
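For a concrete version of the horseshoe computation (a sketch; the expansion factor $3$ is an illustrative choice, not fixed by the answer): in the standard affine model both branches expand by $s = 3$, so the $(\frac12, \frac12)$ Bernoulli measure has entropy $\log 2$ while $\lambda_+ = \log 3 > \log 2$:

```python
import math

# Affine horseshoe model: both branches expand horizontally by s = 3
# (s must exceed 2 so the image can bend around and form two branches).
s = 3.0
p = (0.5, 0.5)                              # Bernoulli weights on the branches

entropy = -sum(q * math.log(q) for q in p)  # h_mu = log 2 for the (1/2,1/2) shift
lambda_plus = math.log(s)                   # |f'| = s everywhere on the invariant set

print(entropy, lambda_plus)    # log 2 ≈ 0.6931 < log 3 ≈ 1.0986
print(entropy < lambda_plus)   # Ruelle's inequality is strict here
```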

There is a well-developed theory giving necessary and sufficient conditions for Ruelle's inequality to be an equality. Roughly speaking, equality holds iff $\mu$ does not 'waste' any expansion on regions of phase space that the dynamics subsequently discards.

More precisely: Pesin originally proved that when $\mu$ is absolutely continuous with respect to Lebesgue measure, Ruelle's inequality is an equality. Then, in the mid-1980s, Ledrappier and Strelcyn proved that if a measure is SRB, then Ruelle's inequality is an equality. For a reference on SRB measures and their (many, beautiful) properties, see this paper. Ledrappier and Young proved a few years later that the converse also holds: when Ruelle's inequality is an equality, the measure is SRB.