Local diffeomorphism from $\mathbb{R}^m$ onto itself is a covering map.


The following problem is from one of my real analysis textbooks:

Let $f: \mathbb{R}^m \to \mathbb{R}^m$ be a continuously differentiable map such that $|d_xf(v)| \geq \alpha |v|$ for every $x, v \in \mathbb{R}^m$, where $\alpha$ is some strictly positive constant. Show that $f$ is a diffeomorphism of $\mathbb{R}^m$ onto itself. In particular, show that every $x, y \in \mathbb{R}^m$ satisfy $|f(x)-f(y)| \geq \alpha|x-y|$.
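As a sanity check of the claimed inequality, here is a quick numerical experiment with a hypothetical one-dimensional example of my own choosing, $f(x) = 2x + \sin x$: since $f'(x) = 2 + \cos x \ge 1$, the hypothesis holds with $\alpha = 1$, and $|f(x)-f(y)| \ge |x-y|$ should hold for all sampled pairs.

```python
import math
import random

# Hypothetical example map on R^1: f(x) = 2x + sin(x).
# Its derivative is f'(x) = 2 + cos(x) >= 1, so the hypothesis
# |d_x f(v)| >= alpha * |v| holds with alpha = 1.
def f(x):
    return 2.0 * x + math.sin(x)

ALPHA = 1.0

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-100.0, 100.0)
    y = random.uniform(-100.0, 100.0)
    # The conclusion |f(x) - f(y)| >= alpha * |x - y| (small tolerance
    # added only to absorb floating-point rounding).
    assert abs(f(x) - f(y)) >= ALPHA * abs(x - y) - 1e-9

print("inequality holds on all sampled pairs")
```

This does not prove anything, of course, but it illustrates the statement: the lower bound on the derivative propagates to a lower bound on finite differences.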

As a suggestion for approaching the problem, the author says one should show that for every line segment $[f(a),b] \subset \mathbb{R}^m$ there is a path $\lambda$ in $\mathbb{R}^m$ beginning at $a$ such that $f(\lambda(t)) = (1-t)f(a) + tb$.

Now, I understand he is asking me to use the fact that any line segment in $\mathbb{R}^m$ can be lifted to a path in $\mathbb{R}^m$, but I can't see why, under these conditions, $f$ is a covering map. By the Inverse Function Theorem $f$ is a local diffeomorphism, but I know that alone is not enough to conclude it is also a covering map. Is there another way to construct such liftings of line segments without $f$ being a covering map, or is it indeed one and I am just missing something?

Accepted answer:

If $f$ is a local diffeomorphism, we can follow the same lifting construction as if it were a covering map. The only problem we could encounter is that the lift escapes to infinity in finite time. This is ruled out by the additional assumption on $f$, which gives $|(Df(x))^{-1}| \le \alpha^{-1}$ for every $x$.
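To see where that bound comes from (a one-line check): since $|Df(x)v| \ge \alpha |v|$ for all $v$, the linear map $Df(x)$ is injective, hence invertible on $\mathbb{R}^m$; substituting $v = (Df(x))^{-1}w$ gives $$ |w| = \bigl|Df(x)\,(Df(x))^{-1}w\bigr| \ge \alpha\,\bigl|(Df(x))^{-1}w\bigr|, \qquad \text{so}\qquad \bigl|(Df(x))^{-1}\bigr| \le \alpha^{-1}. $$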

Let us find a continuously differentiable $\lambda \colon [0,1] \to \mathbb{R}^m$ such that $$ \lambda(0) = a, \tag{1} $$ $$ f(\lambda(t)) = (1-t)f(a) + tb. \tag{2} $$ Differentiating (2) yields $Df(\lambda(t)) \cdot \lambda'(t) = b-f(a)$, or equivalently $$ \lambda'(t) = (Df(\lambda(t)))^{-1} \cdot (b-f(a)). $$ This is an equation of the form $\lambda'(t) = F(\lambda(t))$ with $F$ continuous and bounded (by $\alpha^{-1}|b-f(a)|$), so there is a solution on all of $[0,1]$ with the initial condition (1): the bound on $F$ prevents the solution from escaping to infinity before $t = 1$. Reversing our reasoning, we see that the derivative of the difference between the two sides of (2) is zero. Since (2) holds at $t=0$, it holds for all $t \in [0,1]$ (and most importantly, at $t=1$).
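The lifting ODE above can also be integrated numerically. The sketch below uses a hypothetical example of my own, $f(x,y) = (2x + \sin y,\; 2y + \sin x)$, whose Jacobian differs from $2I$ by a matrix of norm at most $1$, so the hypothesis holds with $\alpha = 1$; a classical RK4 integrator then traces the lift $\lambda$ and we check that $f(\lambda(1))$ lands on the prescribed endpoint $b$.

```python
import numpy as np

# Hypothetical example: f(x, y) = (2x + sin y, 2y + sin x).
# Df(p) = [[2, cos y], [cos x, 2]] = 2I + E with |E| <= 1,
# so |Df(p) v| >= |v| for all v (alpha = 1).
def f(p):
    x, y = p
    return np.array([2.0 * x + np.sin(y), 2.0 * y + np.sin(x)])

def Df(p):
    x, y = p
    return np.array([[2.0, np.cos(y)], [np.cos(x), 2.0]])

def lift_segment(a, b, n_steps=2000):
    """Integrate lambda'(t) = (Df(lambda(t)))^{-1} (b - f(a)) with
    lambda(0) = a, using classical RK4 on [0, 1]; returns lambda(1)."""
    rhs_const = b - f(a)          # the constant vector b - f(a) in the ODE
    def F(p):
        # Solve Df(p) * v = b - f(a) instead of forming the inverse.
        return np.linalg.solve(Df(p), rhs_const)
    lam = np.array(a, dtype=float)
    h = 1.0 / n_steps
    for _ in range(n_steps):
        k1 = F(lam)
        k2 = F(lam + 0.5 * h * k1)
        k3 = F(lam + 0.5 * h * k2)
        k4 = F(lam + h * k3)
        lam = lam + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return lam

a = np.array([0.3, -1.2])
b = np.array([5.0, 2.0])
end = lift_segment(a, b)
print(np.linalg.norm(f(end) - b))  # residual: the lift ends in the fibre over b
```

The residual printed at the end should be tiny, reflecting the fact that (2) holds at $t = 1$, i.e. the endpoint of the lift is a preimage of $b$; this is exactly the surjectivity step in the covering-map argument.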