Showing $\frac{\sinh t}{\sinh T}$ extremises the functional $S[x]=\int_0^T\big(\dot x(t)^2+x(t)^2\big)\ \mathrm dt$ with given boundary conditions


Show that $t\mapsto\frac{\sinh t}{\sinh T}$ extremises the functional $S[x] = \int_0^T \left( \dot{x}(t)^2 + x(t)^2 \right) \ \mathrm dt$ with boundary conditions $x(0) = 0$ and $ x(T) = 1$.

I can show that $\frac{\sinh t}{\sinh T}$ is one solution of the resulting Euler–Lagrange ODE, $\ddot{x} = x$, obtained from the functional in the title, but I can't think of a way to prove that it is an extremal solution (specifically, it is supposed to be a minimum, according to the solutions manual [1]).
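(For concreteness, the step I have done: with $ L ( x , \dot x ) = \dot x ^ 2 + x ^ 2 $, the Euler–Lagrange equation $$ \frac { \mathrm d } { \mathrm d t } \frac { \partial L } { \partial \dot x } - \frac { \partial L } { \partial x } = 0 $$ gives $ 2 \ddot x - 2 x = 0 $, i.e. $ \ddot x = x $; the general solution $ A \sinh t + B \cosh t $ together with $ x ( 0 ) = 0 $ and $ x ( T ) = 1 $ yields $ x ( t ) = \frac { \sinh t } { \sinh T } $.)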

How may I proceed?


[1] This problem is Problem 3-5 of James Hartle's Gravity: An Introduction to Einstein's General Relativity.


Let $ X $ be the real vector space that is the domain of $ S $ (it's usual to consider the space of twice continuously differentiable functions in variational problems, but you may take a space other than $ C ^ 2 ( [ 0 , T ] ) $ for your own intended application). If we define $ \langle . , . \rangle : X \times X \to \mathbb R $ with $$ \langle x , y \rangle = \int _ 0 ^ T \big( \dot x ( t ) \dot y ( t ) + x ( t ) y ( t ) \big) \ \mathrm d t $$ for all $ x , y \in X $, it's rather straightforward to check that it gives an inner product on $ X $, which induces the norm $ \| x \| = \sqrt { S [ x ] } $ for all $ x \in X $. This observation is not necessary, but makes everything simpler, and gives more compact notations.

Now, note that $$ S [ x + h ] = \| x + h \| ^ 2 = \| x \| ^ 2 + 2 \langle x , h \rangle + \| h \| ^ 2 \tag 0 \label 0 $$ for all $ x , h \in X $. Considering $ x _ * \in X $ given with $ x _ * ( t ) = \frac { \sinh t } { \sinh T } $ for all $ t \in [ 0 , T ] $, and taking any $ h \in X $ with $ h ( 0 ) = h ( T ) = 0 $, we can see that $$ \langle x _ * , h \rangle = \int _ 0 ^ T \left( \frac { \cosh t } { \sinh T } \dot h ( t ) + \frac { \sinh t } { \sinh T } h ( t ) \right) \ \mathrm d t = \\ \frac 1 { \sinh T } \int _ 0 ^ T \frac { \mathrm d } { \mathrm d t } \big( ( \cosh t ) h ( t ) \big) \ \mathrm d t = \frac { ( \cosh T ) h ( T ) - ( \cosh 0 ) h ( 0 ) } { \sinh T } = 0 \text . \tag 1 \label 1 $$ Together, \eqref{0} and \eqref{1} show that for all $ h \in X $ with $ h ( 0 ) = h ( T ) = 0 $, $$ S [ x _ * + h ] = S [ x _ * ] + \| h \| ^ 2 \ge S [ x _ * ] \text . \tag 2 \label 2 $$ As $ x _ * ( 0 ) = 0 $ and $ x _ * ( T ) = 1 $, \eqref{2} can be rephrased: for any $ x \in X $ with $ x ( 0 ) = 0 $ and $ x ( T ) = 1 $, $ S [ x _ * ] \le S [ x ] $. This means that $ x _ * $ is the global minimizer of $ S $.
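As a quick numerical sanity check (purely illustrative, not part of the proof), the identity $ S [ x _ * + h ] = S [ x _ * ] + \| h \| ^ 2 $ can be verified with NumPy; the perturbation $ h $ and the horizon $ T = 2 $ below are arbitrary choices:

```python
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 20001)

def S(x):
    """S[x] = integral over [0, T] of x'(t)^2 + x(t)^2, via finite differences and the trapezoid rule."""
    dx = np.gradient(x, t)
    f = dx**2 + x**2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

x_star = np.sinh(t) / np.sinh(T)   # the candidate extremal
h = np.sin(np.pi * t / T)          # arbitrary perturbation with h(0) = h(T) = 0

lhs = S(x_star + h)
rhs = S(x_star) + S(h)             # S[h] = ||h||^2
print(lhs, rhs)                    # agree up to discretization error
print(S(x_star))                   # ≈ coth(T), the minimal value
```

Here the minimal value is $ S [ x _ * ] = \coth T $, which follows from $ \dot x _ * ( t ) ^ 2 + x _ * ( t ) ^ 2 = \cosh ( 2 t ) / \sinh ^ 2 T $.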


The above argument was very specific to this problem and does not generalize to a wide class of variational problems. If you're looking for more general methods, here are some useful tools.

Remember that:

A function $ f : \mathbb R \to \mathbb R $ has a local minimum at a point $ x _ * \in \mathbb R $ if $ \left. \frac { \mathrm d } { \mathrm d \epsilon } f ( x _ * + \epsilon ) \right| _ { \epsilon = 0 } = 0 $ and $ \left. \frac { \mathrm d ^ 2 } { \mathrm d \epsilon ^ 2 } f ( x _ * + \epsilon ) \right| _ { \epsilon = 0 } > 0 $.

There is the following generalization of this theorem to the case of functionals:

A functional $ S : X \to \mathbb R $ has a local minimum at $ x _ * \in X $ if its first variation vanishes at $ x _ * $, and its second variation is strongly positive at $ x _ * $.

For any nonnegative integer $ k $, the $ k $-th variation $ \delta ^ k S [ x , h ] $ is defined as the $ k $-th derivative, with respect to $ \epsilon $, of the value of $ S $ perturbed from $ x $ in the direction of $ h $; i.e. $$ \delta ^ k S [ x , h ] = \left. \frac { \mathrm d ^ k } { \mathrm d \epsilon ^ k } S [ x + \epsilon h ] \right| _ { \epsilon = 0 } \text . $$ Strong positivity of the second variation at $ x $ means that there exists a positive constant $ a \in \mathbb R $ such that $ \delta ^ 2 S [ x , h ] \ge a \| h \| ^ 2 $ for all $ h $.

The part related to the first variation being zero is the part you already know; it results in the Euler–Lagrange equations, and you find candidates for $ x _ * $. The part related to the second variation was fairly trivial in the case of your problem; we had $$ S [ x + \epsilon h ] - S [ x ] = 2 \epsilon \langle x , h \rangle + \epsilon ^ 2 \| h \| ^ 2 \text , $$ and thus $ \delta S [ x , h ] = 2 \langle x , h \rangle $ and $ \delta ^ 2 S [ x , h ] = 2 \| h \| ^ 2 $, which shows that the second variation is strongly positive at any $ x \in X $. This is in fact the result of the clever choice of the norm on $ X $. We could choose other norms on $ X $, for example $$ \| x \| _ 2 = \left( \int _ 0 ^ T x ( t ) ^ 2 \ \mathrm d t \right) ^ { 1 / 2 } \text . $$ With some more effort, this could also work, since Wirtinger's inequality gives $ \| \dot h \| _ 2 \ge \frac \pi T \| h \| _ 2 $ for any $ h $ with $ h ( 0 ) = h ( T ) = 0 $. What I'm trying to point out is that, in case the norm is not given, choosing it wisely will help a lot in solving the problem.
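If you like, the two variations can also be probed numerically by finite differences in $ \epsilon $ (again purely illustrative; the grid, the perturbation $ h $, and the step $ \epsilon $ are arbitrary choices):

```python
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 20001)

def S(x):
    # S[x] = integral over [0, T] of x'(t)^2 + x(t)^2, via finite differences and the trapezoid rule
    dx = np.gradient(x, t)
    f = dx**2 + x**2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

x_star = np.sinh(t) / np.sinh(T)
h = t * (T - t)                    # perturbation with h(0) = h(T) = 0
eps = 1e-4

# central differences in eps approximate the first and second variations
dS  = (S(x_star + eps * h) - S(x_star - eps * h)) / (2 * eps)
d2S = (S(x_star + eps * h) - 2 * S(x_star) + S(x_star - eps * h)) / eps**2

print(dS)                          # ≈ 0: the first variation vanishes at x_*
print(d2S, 2 * S(h))               # the second variation equals 2 ||h||^2
```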

For showing that a given local minimum is in fact a global minimum, let's again remember that:

If a function $ f : \mathbb R \to \mathbb R $ is convex and $ x _ * \in \mathbb R $ is a local minimizer of $ f $, then it is also a global minimizer of $ f $.

This same statement is also true for a functional $ S : X \to \mathbb R $:

If a functional $ S : X \to \mathbb R $ is convex and $ x _ * \in X $ is a local minimizer of $ S $, then it is also a global minimizer of $ S $.

For example, in the case of your problem, you could note that for any $ x , y \in X $ and any $ \alpha \in [ 0 , 1 ] $, the properties of norm imply that $$ \| ( 1 - \alpha ) x + \alpha y \| \le \| ( 1 - \alpha ) x \| + \| \alpha y \| = ( 1 - \alpha ) \| x \| + \alpha \| y \| \text , $$ and thus the norm itself is a convex functional. As $ \| x \| $ and $ S [ x ] = \| x \| ^ 2 $ have the same local/global minima, $ x _ * $ is a local minimizer of the norm (on the set of functions $ x : [ 0 , T ] \to \mathbb R $ with $ x ( 0 ) = 0 $ and $ x ( T ) = 1 $). By convexity, $ x _ * $ is the global minimizer of $ \| . \| $, and thus of $ S $ as well.
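Convexity, too, can be checked numerically along the segment between two admissible trial curves (an illustration only; the trial curves below are arbitrary choices satisfying the boundary conditions):

```python
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 4001)

def S(x):
    # S[x] = integral over [0, T] of x'(t)^2 + x(t)^2, via finite differences and the trapezoid rule
    dx = np.gradient(x, t)
    f = dx**2 + x**2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# two admissible trial curves with x(0) = 0 and x(T) = 1
x = t / T
y = (t / T) ** 3

for a in np.linspace(0.0, 1.0, 11):
    # convexity: S[(1-a)x + a y] <= (1-a) S[x] + a S[y]
    assert S((1 - a) * x + a * y) <= (1 - a) * S(x) + a * S(y) + 1e-12

x_star = np.sinh(t) / np.sinh(T)
print(S(x_star), S(x), S(y))       # the extremal gives the smallest value
```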

The property of convexity can be weakened, so that the theorem works for a larger class of functionals. Some examples of such properties are being semilocally convex or star-shaped. You might like to take a look at the following article by Ewing for such examples:

Ewing, George M., Sufficient conditions for global minima of suitably convex functionals from variational and control theory, SIAM Rev. 19, 202-220 (1977). ZBL0361.49011.