How to find the integral curves that are orbits of one-parameter groups?


Consider $\mathbb{R}^2$ with the standard symplectic structure and inner product. Consider a Hamiltonian $$H=(x,y)A(x,y)^t$$ where $$A=\begin{pmatrix} \alpha & \beta \\ \beta & \delta \end{pmatrix}$$ with entries in $\mathbb{R}$. I have to determine for which $A$ the integral curves are given by orbits of one-parameter groups acting by isometries of $\mathbb{R}^2$, and I need to give an explicit formula for such an orbit. I am having trouble here; for one thing, I don't understand what I am supposed to do.

I understand the following:

Let $M$ be a smooth manifold. If $\theta: \mathbb{R} \times M \to M $ is a global flow, then we immediately have the following maps:

$\forall t \in \mathbb{R}, \quad \theta_ t: M \to M$ is a diffeomorphism.

$\forall \ p \in M,$ we have a smooth curve $\theta ^ {(p)}: \mathbb{R} \to M$ whose image is the orbit of $p$ under the group action.

And lastly, $t \mapsto \theta_t$ is a group homomorphism $\mathbb{R} \to \mathrm{Diff}(M)$.

I don't know how to use this understanding to solve my problem. Please help!

Best Answer

The standard symplectic structure on $\Bbb R^2$ is given by the two-form

$\omega = dx \wedge dy, \tag{1}$

and the standard inner product $\langle \cdot, \cdot \rangle$ is

$\langle \mathbf r_1, \mathbf r_2 \rangle = \mathbf r_1^T \mathbf r_2 = x_1 x_2 + y_1 y_2, \tag{2}$

where

$\mathbf r_i = \begin{pmatrix} x_i \\ y_i \end{pmatrix} \in \Bbb R^2, \; \; i = 1, 2. \tag{3}$

We note that taking

$\mathbf e_x = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \dfrac{\partial}{\partial x} \tag{4}$

and

$\mathbf e_y = \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \dfrac{\partial}{\partial y},\tag{5}$

we have

$\omega (\mathbf e_x, \cdot) = (dx \wedge dy)(\dfrac{\partial}{\partial x}) = dx (\dfrac{\partial}{\partial x}) dy = dy \tag{6}$

and

$\omega(\mathbf e_y, \cdot) = (dx \wedge dy)(\dfrac{\partial}{\partial y }) = -(dy \wedge dx)(\dfrac{\partial}{\partial y }) = -dy(\dfrac{\partial}{\partial y })dx = -dx. \tag{7}$

$\omega$ may of course be construed as a linear map $T_\omega: \Bbb R^2 \to (\Bbb R^2)^\ast$, the dual space of $\Bbb R^2$, by defining, for $\mathbf v \in \Bbb R^2$,

$T_\omega(\mathbf v) = \omega(\mathbf v, \cdot); \tag{8}$

then

$(T_\omega(\mathbf v))(\mathbf w) = \omega(\mathbf v, \mathbf w). \tag{9}$

(6) and (7) show that the matrix of $T_\omega$, using the bases $\mathbf e_x$, $\mathbf e_y$ of $\Bbb R^2$, and $dx$, $dy$ of $(\Bbb R^2)^\ast$, is given by

$[T_\omega] = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}; \tag{10}$
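As a quick numerical sanity check (my own addition, not part of the derivation; it assumes NumPy), one can verify that the matrix (10) reproduces $\omega$, in the sense that $\omega(\mathbf v, \mathbf w) = ([T_\omega]\mathbf v) \cdot \mathbf w$:

```python
import numpy as np

# Matrix of T_omega from (10).
T_omega = np.array([[0.0, -1.0],
                    [1.0,  0.0]])

def omega(v, w):
    """The standard area form dx ^ dy applied to the pair (v, w)."""
    return v[0] * w[1] - v[1] * w[0]

# omega(v, w) should equal ([T_omega] v) . w for arbitrary vectors.
rng = np.random.default_rng(0)
v, w = rng.standard_normal(2), rng.standard_normal(2)
assert np.isclose(omega(v, w), (T_omega @ v) @ w)
```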

this matrix representation of $\omega$ will prove convenient in the computations which follow. We also recall that we may, with the aid of the inner product $\langle \cdot, \cdot \rangle$ on $\Bbb R^2$, identify vectors $\mathbf v$ in $\Bbb R^2$ with dual vectors in $(\Bbb R^2)^\ast$, via the map $\phi$ which sends $\mathbf v$ to $\phi(\mathbf v) = \langle \mathbf v, \cdot \rangle$, that is

$(\phi(\mathbf v))(\mathbf w) = \langle \mathbf v, \mathbf w \rangle, \tag{11}$

holding for all $\mathbf w \in \Bbb R^2$. The map $\phi$ of course identifies elements of $\Bbb R^2$ with those of $(\Bbb R^2)^\ast$, but in a different way than $\omega$; it is easy to see that

$\phi(\mathbf e_x) = dx \tag{12}$

and

$\phi(\mathbf e_y) = dy; \tag{13}$

for example,

$(\phi(\mathbf e_x))(\mathbf e_x) = \langle \mathbf e_x, \mathbf e_x \rangle = 1 = dx(\mathbf e_x), \tag{14}$

and

$(\phi(\mathbf e_x))(\mathbf e_y) = \langle \mathbf e_x, \mathbf e_y \rangle = 0 = dx(\mathbf e_y), \tag{15}$

establishing (12); of course (13) is similarly shown. (12)-(15) are illustrative of the self-duality property of inner product spaces; that is, vectors in such spaces may be identified with elements of the dual space via maps such as $\phi$ which arise naturally from an inner product structure. Of course, the matrix of $\phi$ with respect to the bases used above for $\omega$ is

$[\phi] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I, \tag{16}$

and we observe that the map $\phi^{-1} \circ T_\omega : \Bbb R^2 \to \Bbb R^2$ satisfies

$\phi^{-1} \circ T_\omega(\mathbf e_x) = \mathbf e_y \tag{17}$

and

$\phi^{-1} \circ T_\omega(\mathbf e_y) = -\mathbf e_x \tag{18}$

with matrix

$[\phi^{-1} \circ T_\omega] = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}. \tag{19}$

(17)-(19) may in fact be regarded as an expression of the symplectic form $\omega$ in terms of the corresponding representation as an operator on $\Bbb R^2$; either form may be used in computations, and I will so do in the following.

I have belabored these preliminary points at some length in order to resolve any ambiguities with the definitions my readers may experience due to possibly differing backgrounds.

Having seen these things, we turn to the Hamiltonian at hand,

$H = (x,y)A(x,y)^T, \tag{20}$

where

$A = \begin{bmatrix} \alpha & \beta \\ \beta & \delta \end{bmatrix}; \tag{21}$

we see from (20), (21) that

$H = (x, y)\begin{bmatrix} \alpha & \beta \\ \beta & \delta \end{bmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = (x, y) \begin{pmatrix} \alpha x + \beta y \\ \beta x + \delta y \end{pmatrix} = \alpha x^2 + 2\beta xy + \delta y^2. \tag{22}$
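A one-line numerical check of the expansion (22), with arbitrary illustrative parameter values (my own addition, assuming NumPy):

```python
import numpy as np

# Arbitrary illustrative values for the entries of A in (21).
alpha, beta, delta = 1.5, -0.5, 2.0
A = np.array([[alpha, beta],
              [beta,  delta]])

def H(x, y):
    """The quadratic form H = (x, y) A (x, y)^T of (20)."""
    r = np.array([x, y])
    return r @ A @ r

# Agrees with the expanded form alpha x^2 + 2 beta x y + delta y^2 of (22).
x, y = 0.4, -1.1
assert np.isclose(H(x, y), alpha * x**2 + 2 * beta * x * y + delta * y**2)
```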

We pause to observe that there is no loss of generality in taking $A$ to be a symmetric matrix, i.e., taking $A_{12} = A_{21} = \beta$ rather than, say $A_{12} = \beta$, $A_{21} = \gamma$, since in this case the Hamiltonian would be

$H = \alpha x^2 + (\beta + \gamma) xy + \delta y^2. \tag{23}$

Inspection of (23) reveals that the coefficient of the $xy$ term in $H$ depends only on the sum $\beta + \gamma$, and not on $\beta$ and $\gamma$ separately. Another way to see this is to write

$A = A_+ + A_-, \tag{24}$

where $A_+$, $A_-$ are the symmetric and skew-symmetric parts of $A$:

$A_+ = \dfrac{A + A^T}{2} = A_+^T = \begin{bmatrix} \alpha & \dfrac{\beta + \gamma}{2} \\ \dfrac{\beta + \gamma}{2} & \delta \end{bmatrix}, \tag{25}$

$A_- = \dfrac{A - A^T}{2} = -A_-^T = \begin{bmatrix} 0 & \dfrac{\beta - \gamma}{2} \\ \dfrac{\gamma - \beta }{2} & 0 \end{bmatrix}. \tag{26}$

Since for any skew-symmetric matrix $K$ and vector $\mathbf z$ we have

$\mathbf z^T K \mathbf z = 0, \tag{27}$

we see that

$(x, y) A (x, y)^T = (x, y) (A_+ + A_-)(x, y)^T$ $= (x, y)A_+(x, y)^T + (x, y)A_-(x, y)^T = (x, y)A_+(x, y)^T, \tag{28}$

i.e., the quadratic form $(x, y) A (x, y)^T$ only depends on the symmetric part of $A$; the independence of $\beta$ and $\gamma$ "washes out", as it were; we need only consider the case $A_{21} = \gamma = \beta = A_{12}$ when evaluating quadratic forms such as $H$.
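The decomposition (24)-(28) is easy to verify numerically (a sketch of my own, assuming NumPy; the matrix entries are arbitrary illustrative choices):

```python
import numpy as np

# An asymmetric A, as in the discussion around (23): A_12 = beta, A_21 = gamma.
alpha, beta, gamma, delta = 1.0, 2.0, -3.0, 4.0
A = np.array([[alpha, beta],
              [gamma, delta]])
A_plus  = (A + A.T) / 2   # symmetric part, eq. (25)
A_minus = (A - A.T) / 2   # skew-symmetric part, eq. (26)

z = np.array([0.7, -1.3])
assert np.isclose(z @ A_minus @ z, 0.0)        # eq. (27): skew part contributes nothing
assert np.isclose(z @ A @ z, z @ A_plus @ z)   # eq. (28): only A_plus matters
```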

Returning to the problem at hand, we have from (21)

Returning to the problem at hand, we have from (22)

The vector field $\mathbf X_H$ driving the corresponding Hamiltonian dynamical system by definition satisfies

$dH = \omega(\mathbf X_H, \cdot) = T_\omega(\mathbf X_H); \tag{30}$

setting

$\mathbf X_H = \begin{pmatrix} X_x \\ X_y \end{pmatrix}, \tag{31}$

we may apply (17)-(19) to (30), (31) and find

$\phi^{-1} \circ T_\omega (\mathbf X_H) = \phi^{-1}(dH) = \phi^{-1}(\dfrac{\partial H}{\partial x}dx + \dfrac{\partial H}{\partial y}dy) = \dfrac{\partial H}{\partial x} \phi^{-1}(dx) + \dfrac{\partial H}{\partial y} \phi^{-1}(dy)$ $= \dfrac{\partial H}{\partial x} \mathbf e_x + \dfrac{\partial H}{\partial y} \mathbf e_y, \tag{32}$

and

$\phi^{-1} \circ T_\omega(\mathbf X_H) = \phi^{-1} \circ T_\omega(X_x \mathbf e_x + X_y \mathbf e_y)$ $= X_x \phi^{-1} \circ T_\omega(\mathbf e_x) + X_y\phi^{-1} \circ T_\omega(\mathbf e_y) = X_x \mathbf e_y - X_y \mathbf e_x. \tag{33}$

Comparing (32) and (33) we see that

$\mathbf X_H = \begin{pmatrix} X_x \\ X_y \end{pmatrix} = \begin{pmatrix} \dfrac{\partial H}{\partial y} \\ -\dfrac{\partial H}{\partial x} \end{pmatrix} = \begin{pmatrix} 2 \delta y + 2\beta x \\ -2\alpha x - 2 \beta y \end{pmatrix} = \begin{bmatrix} 2\beta & 2\delta \\ -2 \alpha & -2 \beta \end{bmatrix} \begin{pmatrix} x \\ y \end{pmatrix}; \tag{34}$

the Hamiltonian vector field is thus linear with coefficient matrix

$B = \begin{bmatrix} 2\beta & 2\delta \\ -2 \alpha & -2 \beta \end{bmatrix}, \tag{35}$

and the Hamiltonian differential equations in this case become, setting

$\mathbf r = \begin{pmatrix} x \\ y \end{pmatrix}, \tag{36}$

$\dot {\mathbf r} = B \mathbf r. \tag{37}$
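The computation (34)-(37) can be sketched numerically (my own addition; it assumes SciPy is available for the matrix exponential, and the identity $B = [T_\omega]^{-1}(2A)$ in the code is my own restatement of (34), not stated explicitly in the derivation above):

```python
import numpy as np
from scipy.linalg import expm  # assumes SciPy is available

# Arbitrary illustrative values for the entries of A in (21).
alpha, beta, delta = 1.0, 0.5, 2.0
A = np.array([[alpha, beta],
              [beta,  delta]])
B = 2.0 * np.array([[beta,   delta],
                    [-alpha, -beta]])   # coefficient matrix, eq. (35)
T_inv = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])         # inverse of the matrix (19)
assert np.allclose(B, T_inv @ (2 * A)) # B = [T_omega]^{-1} (2A), restating (34)

def theta(t, r0):
    """The flow theta_t(r0) = e^{Bt} r0 of the linear system (37)."""
    return expm(B * t) @ r0

r0 = np.array([1.0, -1.0])
s, t = 0.3, 0.8
# one-parameter group property: theta_{s+t} = theta_s o theta_t
assert np.allclose(theta(s + t, r0), theta(s, theta(t, r0)))
# theta_t solves r' = B r: centered difference vs. B @ theta_t(r0)
h = 1e-6
deriv = (theta(t + h, r0) - theta(t - h, r0)) / (2 * h)
assert np.allclose(deriv, B @ theta(t, r0), atol=1e-4)
```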

At last, we are in a position to dispose of the posed questions in rather short order. First, an explicit formula for the flow $\theta_t$ may be written

$\theta_t(\mathbf r_0) = e^{Bt} \mathbf r_0, \tag{38}$

as holds for any constant-coefficient linear system; here $\mathbf r_0 = (x_0, y_0)^T$ is the initial point. Explicit formulas for $\theta_t(\mathbf r_0)$ may now be derived in the usual fashion by computing $e^{Bt}$ explicitly from the Jordan form of $B$, that is, by calculating its eigenvalues and eigenvectors. However, much of this effort may be short-circuited by first addressing the question of which $A$ make the flow $\theta_t(\mathbf r_0) = e^{Bt} \mathbf r_0$ act by isometries; this is pretty easy. One-parameter groups of linear isometries of $\Bbb R^2$ of the form $e^{Bt}$ are characterized by the property that

$\langle e^{Bt} \mathbf r_1, e^{Bt} \mathbf r_2 \rangle = \langle \mathbf r_1, \mathbf r_2 \rangle; \tag{39}$

for all vectors $\mathbf r_1, \mathbf r_2 \in \Bbb R^2$. Differentiating (39) with respect to $t$ yields

$\langle Be^{Bt} \mathbf r_1, e^{Bt} \mathbf r_2 \rangle + \langle e^{Bt} \mathbf r_1, Be^{Bt} \mathbf r_2 \rangle = 0; \tag{40}$

setting $t = 0$ in (40) yields

$\langle B\mathbf r_1, \mathbf r_2 \rangle + \langle \mathbf r_1, B \mathbf r_2 \rangle = 0; \tag{41}$

in the usual manner (41) implies

$\langle \mathbf r_1, B^T \mathbf r_2 \rangle + \langle \mathbf r_1, B \mathbf r_2 \rangle = 0, \tag{42}$

or

$\langle \mathbf r_1, (B^T + B)\mathbf r_2 \rangle = 0. \tag{43}$

Since $\mathbf r_1$ and $\mathbf r_2$ are arbitrary, this yields

$B^T + B = 0, \tag{44}$

or

$B^T = -B; \tag{45}$

$B$ must be skew-symmetric. Examining (35) in the light of this discovery shows that $\alpha = \delta$, $\beta = 0$; from (21) we see that $A$ must be of the form

$A = \begin{bmatrix} \alpha & 0 \\ 0 & \alpha \end{bmatrix} = \alpha I; \tag{46}$

when written out in terms of components the Hamiltonian equations thus become

$\dot x = 2 \alpha y, \tag{47}$

$\dot y = -2 \alpha x; \tag{48}$

differentiating (47) and substituting from (48), equations (47)-(48) may be combined into

$\ddot x + 4 \alpha^2 x = 0; \tag{49}$

this is the equation of a harmonic oscillator with force-constant-to-mass ratio $4\alpha^2$, i.e., angular frequency $2|\alpha|$; at this point it is safe to cut to the chase and write the solutions of (47)-(48) as

$x(t) = x_0 \cos 2\alpha t + y_0 \sin 2 \alpha t, \tag{50}$

$y(t) = y_0 \cos 2\alpha t - x_0 \sin 2 \alpha t; \tag{51}$

(50), (51) explicitly describe the flow $\theta_t$ of our system, to wit:

$\theta_t((x_0, y_0)^T) = (x_0 \cos 2\alpha t + y_0 \sin 2 \alpha t, y_0 \cos 2\alpha t - x_0 \sin 2 \alpha t)^T. \tag{52}$
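As a final numerical check (my own addition, assuming NumPy), one can verify that (50)-(51) solve the Hamiltonian equations (47)-(48) and that the flow (52) is indeed an isometry, i.e., it preserves the norm of the initial point:

```python
import numpy as np

# Arbitrary illustrative parameter, time, and initial point.
alpha = 0.75
t = 1.2
x0, y0 = 2.0, -1.0

def flow(tt):
    """theta_tt((x0, y0)^T) from (50)-(52): a rotation of the initial point."""
    return np.array([x0 * np.cos(2 * alpha * tt) + y0 * np.sin(2 * alpha * tt),
                     y0 * np.cos(2 * alpha * tt) - x0 * np.sin(2 * alpha * tt)])

x, y = flow(t)

# (50)-(51) solve (47)-(48): x' = 2 alpha y, y' = -2 alpha x
h = 1e-6
deriv = (flow(t + h) - flow(t - h)) / (2 * h)
assert np.allclose(deriv, np.array([2 * alpha * y, -2 * alpha * x]), atol=1e-6)

# isometry: |theta_t(r0)| = |r0| for all t
assert np.isclose(np.hypot(x, y), np.hypot(x0, y0))
```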

And I think we're finally done!!!

Note: the definitive formula (30) and one hell of a lot of other stuff about Hamiltonian systems may be found in Abraham and Marsden's awesome classic, Foundations of Mechanics, available for free and legal download from Caltech. A very thorough and advanced work, chock full of the good stuff. End of Note.

Hope this helps. New Year's Blessings to One and All,

and as ever,

Fiat Lux!!!