I'm reading a paper on the computation of covariant Lyapunov vectors (https://arxiv.org/pdf/1212.3961.pdf) and, coming from a Machine Learning background, I have some gaps when it comes to dynamical systems.
In part 3 of the article, they consider discrete dynamical systems $$x_{n+1} = f(x_{n})$$ for which the linearized equation in tangent space is $$ w_{n+1} = J(x_{n})w_{n},$$ where $J(x_{n})$ is the Jacobian of $f$ at $x_{n}$. Thus, the evolution operator in tangent space over $k$ steps can be represented by the following matrix (with the Jacobians applied right to left, earliest first): $$ M_{n,k} = J(x_{n+k-1})\cdots J(x_{n+1})J(x_{n}) = \prod_{i = n}^{n+k-1} J(x_{i}),$$ so that $w_{n+k} = M_{n,k}w_{n}$.
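To check my understanding, here is a minimal numerical sketch of this construction, using the Hénon map as a stand-in for $f$ (the choice of map and all function names here are mine, not the paper's):

```python
import numpy as np

A, B = 1.4, 0.3  # classical Hénon parameters (my choice, not from the paper)

def f(x):
    """One step of the Hénon map, standing in for the system's f."""
    return np.array([1.0 - A * x[0]**2 + x[1], B * x[0]])

def jacobian(x):
    """Analytic Jacobian J(x) of the Hénon map."""
    return np.array([[-2.0 * A * x[0], 1.0],
                     [B, 0.0]])

def evolution_operator(x_n, k):
    """M_{n,k} = J(x_{n+k-1}) ... J(x_n), built along the orbit."""
    M, x = np.eye(2), x_n
    for _ in range(k):
        M = jacobian(x) @ M  # prepend the newest Jacobian on the left
        x = f(x)
    return M, x

# Sanity check: M_{n,k} w_n should match the finite-difference evolution
# of a small perturbation eps * w_n along the orbit.
x0, w0, eps, k = np.array([0.1, 0.1]), np.array([1.0, 0.0]), 1e-7, 10
M, xk = evolution_operator(x0, k)
x_pert = x0 + eps * w0
for _ in range(k):
    x_pert = f(x_pert)
print((x_pert - xk) / eps)  # ≈ M @ w0
print(M @ w0)
```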
So far I follow the article, but the rest has me a bit confused: they explain that the matrix $(M_{n,k})^{T}M_{n,k}$ "evolves forward a generic tangent space vector from time $n$ to time $n + k$, and then backward in time up to time $n$" (see figure 1.a). At first glance, I thought it was simply the matrix $(M_{n,k})^{-1}M_{n,k}$ that played this role (because $w_{n+k} = M_{n,k}w_{n}$).
Moreover, they explain earlier in the article that the adjoint operator $(DL^{t}(x))^{*}: T_{L^{t}(x)}M \longrightarrow T_{x}M$ represents the time-reversed evolution in tangent space of a reversible dynamical system. Could someone please explain this to me?
Thanks.
You are right to be a little confused. What really happens is that, at the endpoint of the forward evolution, the vector is turned into a linear functional via the scalar product, and this functional is then transported backward by the adjoint evolution.
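In symbols: the adjoint $M_{n,k}^{*}$ is the operator characterized by
$$\langle M_{n,k}\,w,\; u\rangle = \langle w,\; M_{n,k}^{*}\,u\rangle \qquad \text{for all } w \in T_{x_{n}}M,\ u \in T_{x_{n+k}}M,$$
so it takes the functional $\langle\,\cdot\,, u\rangle$, which lives at time $n+k$, and pulls it back to the functional $\langle\,\cdot\,, M_{n,k}^{*}u\rangle$ at time $n$. The composition $M_{n,k}^{*}M_{n,k}$ therefore goes forward from time $n$ to time $n+k$ and then backward to time $n$, which is exactly the statement in the paper.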
Now, the Euclidean scalar product makes everything deceptively simple: linear functionals, naturally written as row vectors, can themselves be represented (again via the scalar product) as column vectors, and the adjoint evolution is then carried out by the transposed matrix, i.e. $M_{n,k}^{*} = (M_{n,k})^{T}$.
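If you want to see this concretely, here is a small numerical check (just a sketch, with a random matrix standing in for $M_{n,k}$):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))   # stands in for M_{n,k}
w = rng.standard_normal(3)        # tangent vector at time n
u = rng.standard_normal(3)        # tangent vector at time n + k

# Adjoint identity under the Euclidean scalar product:
# <M w, u> = <w, M^T u>
print(np.allclose((M @ w) @ u, w @ (M.T @ u)))  # True

# Forward-then-backward transport of w:
w_fwd = M @ w           # evolve forward from time n to n + k
w_back = M.T @ w_fwd    # adjoint (transpose) step back to time n
print(np.allclose(w_back, (M.T @ M) @ w))  # True: this is (M^T M) w
```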