Using the Mean Value Inequality to prove the Inverse Function Theorem


I am currently working on a problem from an undergraduate-level course on multivariable calculus, as part of a broader effort to prove the Inverse Function Theorem.

Setup: Consider a function $f \in C^1(\mathbb{B}, \mathbb{R}^k)$, where $\mathbb{B} \subset \mathbb{R}^n$ is a ball centred at the origin, for which there exists some $\beta > 0$ such that $$\lvert Df(0)h \rvert \ge \beta \, \lvert h \rvert$$ for all $h \in \mathbb{R}^n$. Define $P: \mathbb{B} \rightarrow \mathbb{R}^k$ by $$P(x) = f(x) - Df(0)x.$$

This setup and the question below come from one part of a larger problem, which proves the Inverse Function Theorem by breaking it into a sequence of exercises.

Question: Prove that there exists $\delta > 0$ such that $$\lvert P(x) - P(y) \rvert \le \tfrac{1}{2} \beta \, \lvert x-y \rvert$$ for all $x, y \in \mathbb{B}_{\delta}$.
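As a purely numerical sanity check (not a proof), here is a short Python sketch with a hypothetical concrete choice $f(x_1, x_2) = (\sin x_1 + x_2,\; x_1 + x_2^2)$, which is not from the course problem: the observed Lipschitz constant of $P$ on $\mathbb{B}_\delta$ does drop below $\beta/2$ once $\delta$ is small enough.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concrete example (my own choice, not from the problem set):
# f(x1, x2) = (sin x1 + x2, x1 + x2^2), so that Df(0) is invertible.
def f(x):
    return np.array([np.sin(x[0]) + x[1], x[0] + x[1] ** 2])

Df0 = np.array([[1.0, 1.0],  # Jacobian of f at the origin
                [1.0, 0.0]])

# Smallest singular value of Df(0): |Df(0)h| >= beta * |h| for all h.
beta = np.linalg.svd(Df0, compute_uv=False).min()

def P(x):
    return f(x) - Df0 @ x

def max_ratio(delta, trials=20000):
    """Largest observed |P(x) - P(y)| / |x - y| over random pairs in B_delta."""
    best = 0.0
    for _ in range(trials):
        x = rng.uniform(-delta, delta, 2)
        y = rng.uniform(-delta, delta, 2)
        if np.linalg.norm(x) > delta or np.linalg.norm(y) > delta:
            continue  # keep only points inside the ball
        d = np.linalg.norm(x - y)
        if d > 1e-12:
            best = max(best, np.linalg.norm(P(x) - P(y)) / d)
    return best

print(beta / 2)        # about 0.309
print(max_ratio(0.1))  # about 0.2, so the bound holds for delta = 0.1
print(max_ratio(1.0))  # exceeds beta/2, so delta = 1 is too large
```

Here $P(x) = (\sin x_1 - x_1,\; x_2^2)$, whose derivative vanishes at the origin, which is exactly why shrinking $\delta$ works.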

I believe we need the generalisation of the mean value inequality here, although I am not certain that it is necessary, or even that this approach works. I am equally interested in solutions that do not use it: my goal is to understand how to prove the result, not to do so in any one particular way.

I would be grateful for any guidance.

On BEST ANSWER

I may be missing something here, but imho there is no need to invoke the mean value theorem for this. Have a look at \begin{eqnarray} ||P(x)- P(y)|| &=& ||f(x)-f(y) - Df(0)(x-y)|| \\ &=& ||f(x)- f(y) -Df(x)(x-y) + (Df(x)- Df(0))(x-y)|| \\ &\le & ||f(x)- f(y) -Df(x)(x-y)|| + ||(Df(x)- Df(0))(x-y)|| \\ &=& ||o(||x-y||)|| + ||(Df(x)-Df(0))(x-y)|| \end{eqnarray} (the first line is the definition of $P$, in the second line we add $0$, in the third line we apply the triangle inequality, and the fourth line uses the definition of differentiability of $f$ at $x$). Now choose $\varepsilon = \beta/4$, so that the two terms together come to at most $\frac{\beta}{2}||x-y||$. If you choose $\delta>0$ small enough you get $$||o(||x-y||)||\le \varepsilon ||x-y||$$ by the well-known properties of $o$, and by continuity of $Df$ $$||(Df(x)-Df(0))(x-y)||\le ||Df(x)-Df(0)||\,||x-y||\le \varepsilon ||x-y||.$$

The inequality for $\lvert Df(0)h \rvert$ is not needed in this reasoning... maybe I really am missing something?

EDIT: yes, I have indeed been missing something: the estimate $$||o(||x-y||)||\le \varepsilon ||x-y||$$ may depend on $x$, since the $o$-term is defined relative to the base point $x$, so the estimate need not be uniform in $x$.

One way out of this is not to replace the difference in the third line with $o(||x-y||)$, but instead to write $f_i(x)-f_i(y) = Df_i(z_i)(x-y)$ for some $z_i$ on the segment from $x$ to $y$ (the mean value theorem, applied to each component $f_i$ separately, since a single $z$ need not work for a vector-valued $f$), and then use continuity of $Df$ once more. Since only guidance (and no complete solution) was asked for, I leave this as a remaining exercise.
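For reference, the variant alluded to here is the mean value inequality for $C^1$ maps $g$ on a convex domain: $$\lVert g(x) - g(y) \rVert \le \sup_{z \in [x,y]} \lVert Dg(z) \rVert \, \lVert x - y \rVert,$$ where $[x,y]$ is the segment from $x$ to $y$. It follows, for instance, from $g(x) - g(y) = \int_0^1 Dg\big(y + t(x-y)\big)(x-y)\,dt$, and (as a hint, not the worked solution) applying it to a suitable $g$ built from $f$ and $Df(0)$ avoids the componentwise bookkeeping entirely.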

Still, the inequality for $|Df(0)h|$ is not needed.