A basic theorem on Morse Theory: diffeomorphism between two manifolds with boundary induced by a map on a manifold


I'm trying to understand the following theorem from "Morse Theory" by John Milnor:

Theorem 3.1

Let f be a smooth real valued function on a manifold M.
Let $a<b$ and suppose that the set $f^{-1}[a,b]$ is compact and contains no critical points of f. Then $M^{a}$ is diffeomorphic to $M^{b}$, where $M^{a}=f^{-1}(-\infty,a]$ and $M^{b}=f^{-1}(-\infty,b]$.

Now I'll write the proof I found in the book:

Choose a Riemannian metric on M, and let $\langle X,Y\rangle$ denote the inner product of two tangent vectors, as determined by this metric. The gradient of f is the vector field $grad f$ on M which is characterized by the identity

$\langle X, grad f\rangle=X(f)$

for any vector field X.

The vector field $grad f$ vanishes precisely at the critical points of f. Let $\rho:M\rightarrow R$ be a smooth function which is equal to $\frac{1}{\langle grad f, grad f\rangle}$ throughout the compact set $f^{-1}[a,b]$, and which vanishes outside of a compact neighborhood of this set.
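Such a $\rho$ exists by the standard smooth-cutoff construction. As a sketch (my own illustration, not spelled out in Milnor), the one-dimensional ingredient is a smooth function that is identically 1 on $[a,b]$ and vanishes outside a slightly larger interval; composing it with f and multiplying by $\frac{1}{\langle grad f, grad f\rangle}$ gives a candidate $\rho$ (in the actual proof one must also arrange support in a *compact* neighborhood, which needs a further bump on M itself):

```python
# Standard smooth cutoff built from exp(-1/x); names here are my own.
import math

def h(x):
    """Smooth on R: 0 for x <= 0, positive for x > 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def step(x):
    """Smooth step: 0 for x <= 0, 1 for x >= 1."""
    return h(x) / (h(x) + h(1.0 - x))

def cutoff(x, a, b, eps):
    """Smooth on R: 1 on [a, b], 0 outside (a - eps, b + eps)."""
    return step((x - (a - eps)) / eps) * step(((b + eps) - x) / eps)

print(cutoff(0.5, 0.0, 1.0, 0.25))   # 1.0 (inside [a, b])
print(cutoff(-0.3, 0.0, 1.0, 0.25))  # 0.0 (outside the eps-neighborhood)
```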

Then the vector field X, defined by $X(q)=\rho(q)\, grad f(q)$, generates a 1-parameter group of diffeomorphisms of M, that is to say a $C^{\infty}$ map $\phi:R\times M\rightarrow M$ such that

- for each $t\in R$ the map $\phi_{t}:M\rightarrow M$ defined by $\phi_{t}(q)=\phi(t,q)$ is a diffeomorphism of M onto itself

- for all $t,s \in R$ we have $\phi_{t+s}=\phi_{t}\circ \phi_{s}$
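As a concrete toy illustration of the group law (my own example, not from the book): take $M=R^2\setminus\{0\}$ with $f(x,y)=x^2+y^2$ and $\rho=\frac{1}{\langle grad f, grad f\rangle}$ everywhere (no cutoff). The flow of X then has the closed form $\phi_t(p)=p\,\sqrt{1+t/|p|^2}$, and $\phi_{t+s}=\phi_t\circ\phi_s$ can be checked directly (for $t,s\ge 0$; this X is not compactly supported, so the flow is not complete for negative times):

```python
# Toy flow phi_t(p) = p * sqrt(1 + t / |p|^2) for f(x, y) = x^2 + y^2
# on R^2 minus the origin; valid for t >= 0 here.
import math

def phi(t, p):
    x, y = p
    c = math.sqrt(1.0 + t / (x * x + y * y))
    return (c * x, c * y)

p = (0.6, 0.8)                  # |p|^2 = 1
lhs = phi(0.7, phi(0.5, p))     # phi_t(phi_s(p))
rhs = phi(1.2, p)               # phi_{t+s}(p)
print(lhs, rhs)                 # equal up to rounding
```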

For fixed $q\in M$ consider the function $t\rightarrow f(\phi_{t}(q))$. If $\phi_{t}(q)$ lies in the set $f^{-1}[a,b]$, then

$\frac{df(\phi_{t}(q))}{dt}=\left\langle \frac{d\phi_{t}(q)}{dt}, grad f\right\rangle=\langle X, grad f\rangle= 1$

Thus the correspondence $t\rightarrow f(\phi_{t}(q))$ is linear with derivative 1 as long as $f(\phi_{t}(q))$ lies between a and b. Now consider the diffeomorphism $\phi_{b-a}:M\rightarrow M$. Clearly this carries $M^{a}$ diffeomorphically onto $M^{b}$.
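The unit-rate property can also be checked numerically in the same spirit (again a toy example of mine, not from the book): on $R^2\setminus\{0\}$ with $f(x,y)=x^2+y^2$, integrating $\frac{dp}{dt}=X(p)$ with $X = grad f / \langle grad f, grad f\rangle$ from a point on the level $f=1$ for time $t=2$ lands, up to Euler error, on the level $f=3$:

```python
# Euler integration of the rescaled gradient flow; f should grow at unit rate.
def f(p):
    return p[0] ** 2 + p[1] ** 2

def X(p):
    gx, gy = 2.0 * p[0], 2.0 * p[1]      # grad f
    n2 = gx * gx + gy * gy               # <grad f, grad f>
    return (gx / n2, gy / n2)            # rho = 1 / |grad f|^2

def flow(p, t, steps=50000):
    """Integrate dp/dt = X(p) with explicit Euler."""
    h = t / steps
    x, y = p
    for _ in range(steps):
        vx, vy = X((x, y))
        x += h * vx
        y += h * vy
    return (x, y)

p0 = (1.0, 0.0)        # f(p0) = 1, i.e. on the level set f = a = 1
p1 = flow(p0, 2.0)     # flow for time t = b - a = 2
print(f(p0), f(p1))    # 1.0 -> approximately 3.0
```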

Now let's come to the question:

It is not clear to me why the last sentence is true. I thought to show that the image of $M^{a}$ under the restriction of $\phi_{b-a}$ to $M^{a}$ is contained in $M^{b}$. To that end I proved that $f^{-1}(a)$ is sent to $f^{-1}(b)$ by $\phi_{b-a}$, using the fact proved above: "Thus the correspondence $t\rightarrow f(\phi_{t}(q))$ is linear with derivative 1 as long as $f(\phi_{t}(q))$ lies between a and b."

After that I got stuck.

Can someone give me some hints/suggestions to conclude the proof?

Thanks in advance.

Best answer:

Let $x\in M^a$. Assume by contradiction that $\phi_{b-a}(x)\not\in M^b$, i.e. $f(\phi_{b-a}(x))>b$. Since $f(x)\le a$, by continuity there is a $t\in[0,b-a)$ such that $f(\phi_{t}(x))=a$. Call $y=\phi_t(x)$; because f increases at unit rate while its value lies in $[a,b]$, we get $f(\phi_{b-a}(x))=f(\phi_{b-a-t}(y))=a+(b-a-t)=b-t\le b$, which is a contradiction. This gives $\phi_{b-a}(M^a)\subseteq M^b$; for surjectivity, the same argument applied to the inverse $\phi_{a-b}=\phi_{b-a}^{-1}$ shows $\phi_{a-b}(M^b)\subseteq M^a$, hence $\phi_{b-a}$ carries $M^a$ onto $M^b$.