ODE problem from do Carmo's book on Riemannian geometry


I'm reading do Carmo's book, Riemannian Geometry, and I have a problem with Jacobi fields. He discusses the case of constant curvature and arrives at the ODE below; my problem is, how does he solve it? Can someone fill in the details? Thanks a lot!

As a result, the Jacobi equation can be written as

$$\frac{D^2J}{dt^2}+KJ~=~0\tag{2}$$

Let $\omega(t)$ be a parallel field along $\gamma$ with $\langle\gamma'(t),\omega(t)\rangle=0$ and $|\omega(t)|=1$. It is easy to verify that

$$J(t)~=~\begin{cases}\frac{\sin(t\sqrt{K})}{\sqrt{K}}\omega(t),~~~&\text{if}~K>0\\t\omega(t),~~~&\text{if}~K=0\\\frac{\sinh(t\sqrt{-K})}{\sqrt{-K}}\omega(t),~~~&\text{if}~K<0\end{cases}$$

is a solution of $(2)$ with initial conditions $J(0)=0$, $J'(0)=\omega(0)$.

There are two answers below.

BEST ANSWER

Let me clarify the relation between the covariant derivative formalism and the standard ODE formalism. Given a parallel vector field $\omega(t)$ along $\gamma(t)$ which satisfies the conditions written, let us try to find a solution of the Jacobi equation of the form $J(t) = f(t) \omega(t)$, where $f \colon I \rightarrow \mathbb{R}$ is a scalar function. By the product rule and the fact that $\omega$ is parallel, we have

$$ \frac{DJ}{dt}(t) = f'(t) \omega(t) + f(t) \frac{D\omega}{dt}(t) = f'(t) \omega(t),\\ \frac{D^2J}{dt^2}(t) = f''(t) \omega(t) $$

so the equation becomes

$$ f''(t) \omega(t) + K f(t) \omega(t) = (f''(t) + Kf(t)) \omega(t) = 0. $$

Since $\omega(t) \neq 0$ for all $t \in I$, we must have $f''(t) + Kf(t) = 0$ for all $t \in I$ and in addition, by the initial conditions, we must also have

$$ J(0) = 0 \iff f(0) = 0, J'(0) = f'(0) \omega(0) = \omega(0) \iff f'(0) = 1. $$

Hence, to find a solution of the Jacobi equation of the form above, we must solve the second-order, constant-coefficient scalar ODE

$$ f''(t) + K f(t) = 0 $$

with initial conditions

$$ f(0) = 0, f'(0) = 1. $$
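Before solving this by hand, one can confirm symbolically (a quick sketch using Python's sympy, not part of do Carmo's text) that the three formulas quoted from the book do satisfy this initial value problem in each curvature regime:

```python
import sympy as sp

t = sp.symbols('t', real=True)
k = sp.symbols('k', positive=True)   # k stands for |K|

# The three claimed solutions, one for each sign of the curvature K
f_pos  = sp.sin(sp.sqrt(k)*t)/sp.sqrt(k)    # K =  k > 0
f_zero = t                                   # K = 0
f_neg  = sp.sinh(sp.sqrt(k)*t)/sp.sqrt(k)   # K = -k < 0

def solves(f, K):
    """Check that f'' + K f = 0 with f(0) = 0 and f'(0) = 1."""
    ode_ok = sp.simplify(sp.diff(f, t, 2) + K*f) == 0
    ic_ok = (f.subs(t, 0) == 0
             and sp.simplify(sp.diff(f, t).subs(t, 0) - 1) == 0)
    return ode_ok and ic_ok

print(solves(f_pos, k), solves(f_zero, 0), solves(f_neg, -k))  # True True True
```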

ANOTHER ANSWER

OK, here are some details:

First of all, I assume $D/dt$ is covariant differentiation along $\gamma(t)$, i.e.,

$\dfrac{D}{dt} \equiv \nabla_{\gamma'(t)}, \tag 1$

where $\nabla$ is the Levi-Civita connection associated with the metric $\langle \cdot, \cdot \rangle$ on our manifold. Now if $\omega(t)$ is parallel along the curve $\gamma(t)$, then

$\dfrac{D\omega}{dt} = \nabla_{\gamma'(t)} \omega = 0, \tag 2$

and if $f(t)$ is any twice-differentiable function defined along $\gamma(t)$, we have, by the Leibniz rule,

$\dfrac{D(f\omega)}{dt} = \dfrac{df}{dt}\omega + f \dfrac{D\omega}{dt} = \dfrac{df}{dt}\omega, \tag 3$

which of course follows from (2):

$\dfrac{D(f\omega)}{dt} = \nabla_{\gamma'(t)} (f \omega) = \gamma'(t)[f] \omega + f \nabla_{\gamma'(t)}\omega = \dfrac{df}{dt} \omega ; \tag 4$

thus

$\dfrac{D^2(f\omega)}{dt^2} = \dfrac{D}{dt} \left ( \dfrac{D(f\omega)}{dt} \right) = \dfrac{D}{dt} \left ( \dfrac{df}{dt} \omega \right ) = \dfrac{d}{dt} \left ( \dfrac{df}{dt} \right ) \omega = \dfrac{d^2 f}{dt^2} \omega; \tag 5$

now if

$J = f\omega \tag 6$

satisfies

$\dfrac{D^2 J}{dt^2} + KJ = 0, \tag 7$

we may, via (5), write

$\left (\dfrac{d^2f}{dt^2} + K f \right ) \omega = \dfrac{d^2f}{dt^2} \omega + K f \omega = \dfrac{D^2(f\omega)}{dt^2} + Kf\omega = \dfrac{D^2J}{dt^2} + KJ = 0; \tag 8$

since $\vert \omega(t) \vert = 1$ along $\gamma(t)$, we have $\omega(t) \ne 0$ on $\gamma(t)$, whence

$\dfrac{d^2f}{dt^2} + K f = 0; \tag 9$

furthermore,

$J(0) = 0 \Longrightarrow f(0)\omega(0) = 0 \Longrightarrow f(0) = 0, \tag{10}$

$J'(0) = \omega(0) \Longrightarrow f'(0) \omega(0) = \omega(0) \Longrightarrow f'(0) = 1; \tag{11}$

so now we see that solving the covariant vector equation (7) with $J = f\omega$ is equivalent to solving the plain and ordinary scalar differential equation (9) with $f(0) = 0$, $f'(0) = 1$. So how do we do that?

Of course, there are a variety of well-known methods for solving a constant-coefficient, linear ordinary differential equation such as (9); one can simply grind out a power series, a completely deterministic process that involves no guessing. But if one is willing to invoke a little intuition, one can make an "informed guess" that a solution to (9) might be of the form

$f(t) = e^{\mu t}; \tag{12}$

then plugging this into (9) yields

$\mu^2 e^{\mu t} + Ke^{\mu t} = 0 \Longrightarrow \mu^2 + K = 0, \tag{13}$

whence in the usual manner,

$K > 0 \Longrightarrow \mu = \pm i \sqrt{K}, \tag{14}$

$K = 0 \Longrightarrow \mu = 0; \tag{15}$

$K < 0 \Longrightarrow \mu = \pm \sqrt{-K}; \tag{16}$
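As a small aside (my own sketch, not part of this answer), sympy can solve the characteristic equation (13) symbolically; with $K$ left generic it returns the pair $\pm\sqrt{-K}$, which specializes to the three cases (14)-(16) according to the sign of $K$:

```python
import sympy as sp

mu, K = sp.symbols('mu K')

# Roots of the characteristic equation mu^2 + K = 0
roots = sp.solve(mu**2 + K, mu)

# Generic answer is +/- sqrt(-K): imaginary +/- i*sqrt(K) when K > 0,
# the double root 0 when K = 0, and real +/- sqrt(-K) when K < 0.
print(roots, all(sp.simplify(r**2 + K) == 0 for r in roots))
```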

it is well-known that cases (14) and (16) yield general solutions of the form

$f(t) = c_+ e^{i \sqrt{K}t} + c_- e^{-i\sqrt{K}t}$ $= c_+(\cos (\sqrt{K}t) + i\sin(\sqrt{K}t)) + c_-(\cos (\sqrt{K}t) - i\sin(\sqrt{K}t))$ $= (c_+ + c_-)\cos(\sqrt{K}t) + i(c_+ - c_-) \sin (\sqrt{K}t), \tag{17}$

and

$f(t) = c_+ e^{\sqrt{-K} t} + c_- e^{-\sqrt{-K}t} = (c_+ + c_-)\cosh(\sqrt{-K}t) +(c_+ - c_-)\sinh(\sqrt{-K}t) ; \tag{18}$

in both cases (17), (18) the condition $f(0) = 0$ implies

$f(0) = c_+ + c_- = 0 \Longrightarrow c_- = -c_+, \tag{19}$

yielding

$f(t) = 2ic_+ \sin(\sqrt{K}t), \; f'(t) = 2ic_+ \sqrt{K} \cos (\sqrt{K}t), \tag{20}$

and

$f(t) = 2c_+ \sinh (\sqrt{-K}t), \; f'(t) = 2c_+ \sqrt{-K} \cosh(\sqrt{-K}t); \tag{21}$

with $f'(0) = 1$, (20) and (21) determine that

$2ic_+ \sqrt{K} = 1 \Longrightarrow c_+ = \dfrac{-i}{2\sqrt{K}}, \tag {22}$

and

$2c_+ \sqrt{-K} = 1 \Longrightarrow c_+ = \dfrac{1}{2 \sqrt{-K}}, \tag{23}$

respectively; thus we see at last that

$K > 0 \Longrightarrow f(t) = \dfrac{\sin (\sqrt{K}t)}{\sqrt{K}}, \tag{24}$

$K < 0 \Longrightarrow f(t) = \dfrac{\sinh (\sqrt{-K}t)}{\sqrt{-K}}. \tag{25}$
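These closed forms can also be cross-checked numerically, by integrating the initial value problem (9) with a standard ODE solver and comparing against (24) and (25). A sketch using scipy (the sample values $K = \pm 3$ are my own choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

def jacobi_f(K, ts):
    """Numerically integrate f'' + K f = 0, f(0) = 0, f'(0) = 1,
    rewritten as the first-order system y0' = y1, y1' = -K y0."""
    sol = solve_ivp(lambda t, y: [y[1], -K * y[0]],
                    (ts[0], ts[-1]), [0.0, 1.0],
                    t_eval=ts, rtol=1e-10, atol=1e-12)
    return sol.y[0]

ts = np.linspace(0.0, 2.0, 101)
for K, closed in [( 3.0, np.sin(np.sqrt(3.0)*ts)/np.sqrt(3.0)),    # (24)
                  (-3.0, np.sinh(np.sqrt(3.0)*ts)/np.sqrt(3.0))]:  # (25)
    err = np.max(np.abs(jacobi_f(K, ts) - closed))
    print(K, err < 1e-6)
```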

As for (15), in the event that $K = 0$, we have found that $\mu = 0$ is a double root of (13), which reduces to $\mu^2 = 0$, corresponding to the $K = 0$ case of (9),

$f''(t) = 0. \tag{26}$

When the characteristic polynomial associated to a second-order ordinary differential equation has a double root $\mu$, it must be of the form

$x^2 - 2\mu x + \mu^2 = (x - \mu)^2 = 0; \tag{27}$

in this case the ODE from which (27) arises by means of the substitution $y = e^{\mu t}$ is

$\ddot y - 2\mu \dot y + \mu^2 y = 0, \tag{28}$

which may also of course be written

$\left ( \dfrac{d}{dt} - \mu \right )^2 y = \left ( \dfrac{d}{dt} - \mu \right )\left ( \dfrac{d}{dt} - \mu \right ) y = \ddot y - 2\mu \dot y + \mu^2 y = 0; \tag{29}$

setting $z = \dot y - \mu y$ we see that this implies

$\dot z - \mu z = 0, \tag{30}$

whence

$z(t) = c_1 e^{\mu t}, \tag{31}$

which is clearly one solution to (29); then with

$\dot y - \mu y = z(t) = c_1 e^{\mu t}, \tag{32}$

we find

$y(t) = (c_0 + c_1 t)e^{\mu t}; \tag{33}$

now when $\mu = 0$ this reduces to

$y(t) = c_0 + c_1 t, \tag{34}$

clearly consistent with (26); in fact, from these last considerations we conclude that, (26) being the $\mu = 0$ case of (9),

$f(t) = c_0 + c_1 t; \tag{35}$

with the initial conditions (10)-(11) we see that

$c_0 = f(0) = 0, \; c_1 = f'(0) = 1, \tag{36}$

whence

$K = 0 \Longrightarrow f(t) = t; \tag{37}$
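The double-root claim can also be checked symbolically (a quick sympy sketch of my own): the candidate (33) annihilates the operator in (28) for arbitrary $\mu$, $c_0$, $c_1$:

```python
import sympy as sp

t, mu, c0, c1 = sp.symbols('t mu c0 c1')

# Candidate general solution (33) for the double-root ODE (28)
y = (c0 + c1*t) * sp.exp(mu*t)
residual = sp.diff(y, t, 2) - 2*mu*sp.diff(y, t) + mu**2*y

print(sp.simplify(residual))  # 0, for every mu, c0, c1
```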

we may now combine (24), (25), (37) together with (6) to see that

$K > 0 \Longrightarrow J(t) = \dfrac{\sin (\sqrt{K}t)}{\sqrt{K}} \omega(t), \tag{38}$

$K = 0 \Longrightarrow J(t) = t \omega(t), \tag{39}$

$K < 0 \Longrightarrow J(t) = \dfrac{\sinh (\sqrt{-K}t)}{\sqrt{-K}} \omega(t). \tag{40}$

In closing, a few final words on the process of "solving" equation (9), which seems to be our OP Huruji Ionut's prime concern. What we have done here, as is often done, is to make a well-motivated, well-informed guess that the solution may be expressed in exponential form as in (12), and then use the given equation (9) to determine the values the parameter $\mu$ may take under various circumstances; in this sense, we are not so much deriving solutions as verifying them. A key theoretical fact that plays an essential role in this endeavor is that a linear equation of order $n$ has precisely $n$ linearly independent solutions; this allows us to affirm that we have indeed found all solutions of a given linear ordinary differential equation. Of course, such solutions may be built up from scratch via power series or other methods without resorting to "guessing", but we have saved ourselves a great many calculations by our ability to postulate and then verify functions which hypothetically satisfy our equations. And in the study of differential equations, especially non-linear ones, guessing is often the only means at our disposal.

One good guess is worth a thousand exploratory computations.