Proving that the inverse of a smooth function is smooth


Suppose I have a smooth function $g: \mathbb{R}^n \to \mathbb{R}^t$ and write the variables as $(x,y)$, where $x \in \mathbb{R}^t$. Suppose the Jacobian matrix of $g(\cdot, y)$ is invertible at $x = 0$ for all $y \in B_1(0)$. Then by the inverse function theorem and a bit of work it follows that $g(\cdot, y)$ is invertible on $B_{\epsilon}(0)$ for all $y \in B_{\epsilon}(0)$, for some $\epsilon > 0$. Let $G(u, y) = g(\cdot, y)^{-1}(u)$.

It follows from the inverse function theorem that for fixed $y$, $G(u, y)$ is smooth in $u$. I am sure that $G$ is also smooth jointly in $(u,y)$, where $$ G: B_{\epsilon'}\big((g(0,0), 0)\big) \to \mathbb{R}^t $$ and $\epsilon' > 0$ is sufficiently small; we suppose such an $\epsilon'$ can be found so that $G$ is well-defined on this open set.

I do not see how to prove that $G$ is smooth... Any comments or explanations are appreciated!


1 Answer


Not sure exactly what you are after, but here is a go: the implicit (or inverse) function theorem may be thought of as a non-linear perturbation of a corresponding linear problem. If you grasp how the linear problem works, then you should get a good idea of how to arrive at the corresponding non-linear solution.

In your setup $u=g(x,y)$ is a $C^k$ map ($k\geq 1$) from ${\Bbb R}^t\times {\Bbb R}^{n-t}$ to ${\Bbb R}^t$ with $g(0,0)=0$, and the assumption is that $A=\partial_x g_{|(0,0)} \in GL_t({\Bbb R})$ is an invertible matrix. Let $B=\partial_y g_{|(0,0)} \in M_{t, n-t}({\Bbb R})$ be the corresponding partial derivative with respect to $y$.

Suppose for a moment that $g$ were linear. Then the implicit function theorem amounts to solving the equation $u = Ax + By$ for $x$, which is easy enough since $A$ is invertible: $$x = A^{-1} u - A^{-1} B y.$$ The linear part corresponds to an order-$1$ Taylor expansion. In general $g$ has a non-linear part as well, so we should really solve $u = Ax + By + \delta g(x,y)$ for $x$, where $\delta g(x,y) = g(x,y) - Ax - By$ is the non-linear remainder. Trying to do as before and isolating $x$ from the linear term, we get $$ x = A^{-1} u - A^{-1} By - A^{-1} \delta g(x,y), $$ where unfortunately $x$ also appears on the right-hand side. Nevertheless, without getting discouraged, we try to use this equation to obtain a solution by bootstrapping: start with a reasonable guess, e.g. the linear solution above, plug it into $x$ on the right-hand side to get a better guess, iterate the procedure, and take limits. You hopefully wind up with a fixed point $x$, which of course depends on $u$ and $y$, so $x=G(u,y)$. For this to work you need reasonable conditions on the right-hand side when iterating. In particular, the map $$ x\mapsto \Gamma(x) = A^{-1} u - A^{-1} By - A^{-1} \delta g(x,y)$$

should be a uniformly contracting map on a neighborhood of $x=0$. Magically, it turns out to be enough to assume that $g$ is $C^1$: the non-linear perturbation is then "small" enough for the iteration scheme to work. The harder part of the proof is to show that the fixed point $G(u,y)$ is then also $C^1$ in $u$ and $y$. The proof typically takes a couple of pages in textbooks (one of my favorites is Serge Lang, Real and Functional Analysis, which is, however, a bit abstract). Now, once you accept that $G$ exists and is at least $C^1$, formulae for derivatives and further regularity come almost for free, again by a kind of bootstrapping argument.
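As an aside, here is a minimal numerical sketch of the contraction iteration itself. The concrete $g$ (the matrices `A`, `B` and the perturbation in `delta_g`) is an assumption made up purely for illustration, not anything from the original problem:

```python
import numpy as np

# Illustrative smooth map g: R^2 x R -> R^2 (all choices here are made up).
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])      # A = d_x g at (0,0), invertible
B = np.array([0.5, -1.0])       # B = d_y g at (0,0)

def g(x, y):
    # linear part A x + B y plus a small smooth non-linear perturbation
    return A @ x + B * y + 0.1 * np.array([x[0]**2, np.sin(x[0] * x[1])])

def delta_g(x, y):
    # non-linear remainder: g minus its first-order Taylor part at the origin
    return g(x, y) - A @ x - B * y

def G(u, y, iters=50):
    """Solve u = g(x, y) for x by the bootstrapping iteration
       x <- A^{-1} u - A^{-1} B y - A^{-1} delta_g(x, y)."""
    A_inv = np.linalg.inv(A)
    x = A_inv @ u - A_inv @ (B * y)          # guess from the linear problem
    for _ in range(iters):
        x = A_inv @ (u - B * y - delta_g(x, y))   # one application of Gamma
    return x

# Sanity check near the origin: g(G(u, y), y) should reproduce u.
u, y = np.array([0.05, -0.02]), 0.01
x = G(u, y)
print(np.allclose(g(x, y), u))   # True
```

Note that the starting guess is exactly the linear solution $A^{-1}u - A^{-1}By$, and each pass through the loop is one application of $\Gamma$; since $\delta g$ has vanishing derivative at the origin, $\Gamma$ contracts near $0$ and the iteration converges rapidly.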

In a neighborhood of the origin we have the following identity between $C^1$ functions: $$ u = g(x,y) = g(G(u,y),y), $$ so taking derivatives with respect to $u$ gives ${\bf 1}_t =\partial_x g (G(u,y),y) \, \partial_u G(u,y)$, from which $$ \partial_u G(u,y) = \big(\partial_x g(G(u,y),y)\big)^{-1}. $$ If $g$ is $C^2$, then the right-hand side is $C^1$ (being a composition of $C^1$ maps), so $\partial_u G(u,y)$ must be $C^1$, and in a similar way you get that so is $\partial_y G(u,y)$. But if the partial derivatives of $G$ are $C^1$, then $G$ itself must be $C^2$. Iterating this argument, we see that $G$ is as smooth as $g$.
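Continuing the hypothetical sketch above (it reuses the `g` and `G` defined there), one can check the derivative formula $\partial_u G(u,y) = \big(\partial_x g(G(u,y),y)\big)^{-1}$ numerically by comparing a finite-difference Jacobian of $G$ with the inverse Jacobian of $g$:

```python
import numpy as np

def jacobian_fd(f, x0, h=1e-6):
    """Central finite-difference Jacobian of a vector-valued f at x0."""
    cols = []
    for i in range(len(x0)):
        e = np.zeros_like(x0)
        e[i] = h
        cols.append((f(x0 + e) - f(x0 - e)) / (2 * h))
    return np.column_stack(cols)

# g and G are the illustrative functions from the previous sketch.
u, y = np.array([0.05, -0.02]), 0.01
x = G(u, y)

dG_du = jacobian_fd(lambda v: G(v, y), u)   # d_u G by finite differences
dg_dx = jacobian_fd(lambda v: g(v, y), x)   # d_x g at the point (G(u,y), y)
print(np.allclose(dG_du, np.linalg.inv(dg_dx), atol=1e-5))   # True
```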