I am struggling with an inductive proof of the implicit function theorem, specifically with the final step, the construction of a function; everything up to this final point is perfectly clear to me. First, the following is known to be true:
Theorem 1: Let $F : G \subseteq \mathbb R^{n+1} \to \mathbb R$, $G \ne \emptyset$, $F \in C^1(G)$, $G$ open. For $x^0 \in \mathbb R^n, y^0 \in \mathbb R$ let $$ F(x^0, y^0) = 0 \quad \mbox{ and } \quad F_y(x^0, y^0) \ne 0. $$ Then there exist a neighborhood $U(x^0) \subseteq \mathbb R^n$ and a function $f : U(x^0) \to \mathbb R$ such that $$ y = f(x_1, \ldots, x_n), \qquad y^0 = f(x_1^0, \ldots, x_n^0) $$ and $$ F(x, f(x)) = 0 $$ and $f \in C^1(U(x^0))$.
Now the Implicit Function Theorem reads as follows.
Theorem 2 (Implicit Function Theorem): Let $F : G \subseteq \mathbb R^{n+m} \to \mathbb R^m$, $G\ne \emptyset$, $F \in C^1(G)$, $G$ open. Also $(x^0, y^0) \in G$ with $x^0 \in \mathbb R^n, y^0 \in \mathbb R^m, F(x^0,y^0) = 0$ and $$ \det\left( \frac{\partial F}{\partial y} \right)_{y=y^0} \ne 0. $$ Then there exist neighborhoods $U(x^0) \subseteq \mathbb R^n, V(y^0) \subseteq \mathbb R^m$ and a function $g : U(x^0) \to V(y^0)$ such that $F(x, g(x)) = 0$ for all $x \in U(x^0)$.
Proof: The proof proceeds by induction on $m$. If $m = 1$ this is Theorem 1, so assume $m > 1$ and suppose (induction hypothesis) that the statement holds for $m-1$. Let $F : G \subseteq \mathbb R^{n+m} \to \mathbb R^m$ be given. By assumption $$ \det\left( \frac{\partial F}{\partial y} \right)_{y=y^0} = \det \begin{pmatrix} \frac{\partial F_1}{\partial y_1} & \frac{\partial F_1}{\partial y_2} & \cdots & \frac{\partial F_1}{\partial y_m} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial F_m}{\partial y_1} & \frac{\partial F_m}{\partial y_2} & \cdots & \frac{\partial F_m}{\partial y_m} \end{pmatrix} \ne 0, $$ so every row of this matrix contains an entry $\ne 0$; after renumbering the variables we may assume w.l.o.g. that $\frac{\partial F_m}{\partial y_m} \ne 0$. By Theorem 1 we can solve $F_m = 0$ locally for $y_m$: there are a neighborhood $U$ of $(x^0, y_1^0, \ldots, y_{m-1}^0)$ and a continuously differentiable function $$ y_m = \varphi(x, y_1, \ldots, y_{m-1}) $$ with ($x \in \mathbb R^n$) $$ F_m(x, y_1, \ldots, y_{m-1}, \varphi(x, y_1, \ldots, y_{m-1})) = 0 $$ for all $(x,y_1,\ldots,y_{m-1}) \in U$. Now set $$ \Phi_i(x, y_1, \ldots, y_{m-1}) := F_i(x, y_1, \ldots, y_{m-1}, \varphi(x, y_1, \ldots, y_{m-1})) $$ for $i = 1, \ldots, m-1$. Then $\Phi : U \subseteq \mathbb R^{n+m-1} \to \mathbb R^{m-1}$, and $\Phi$ fulfills all prerequisites for applying the induction hypothesis (here the proof verifies this, but I omit it because it is quite long and does not bear on my question). Hence there exist neighborhoods $W \subseteq \mathbb R^n, V \subseteq \mathbb R^{m-1}$ and a function $g : W \to V$ with $$ \Phi(x, g(x)) = 0 $$ for all $x \in W$. Now set $h(x) = (g(x), \varphi(x,g(x)))$; then we have $$ F(x, h(x)) = 0 $$ and the proof is finished. $\square$
My question is about the last part, namely the construction of the function $h(x)$:
1) Because $h(x) = (g(x), \varphi(x, g(x)))$, it must be the case that $(x, g(x)) \in U$, since $\varphi : U \to \mathbb R$; but I do not see why this must hold.
2) The same issue arises if I want to show that $F(x, h(x)) = 0$. If $i \ne m$ then $$ F_i(x, h(x)) = \Phi_i(x, g(x)) $$ by definition, but $$ F_m(x, h(x)) = F_m(x, g(x), \varphi(x,g(x))) = F_m(x, g_1(x), \ldots, g_{m-1}(x), \varphi(x, g_1(x), \ldots, g_{m-1}(x))), $$ and to conclude that this equals $0$ it again must be that $(x, g(x)) \in U$, which I do not see either.
As a minor point, you forgot to state that the implicit function $g$ in Theorem 2 is of class $C^1$.
More importantly, Theorems 1 and 2 as you state them do not claim the (local) uniqueness of the implicit solution. This feature is important.
By reading your question I guess that you are trying to understand the inductive proof of the implicit function theorem given in ``The Implicit Function Theorem - History, Theory, and Applications'', pp.39-41, by S. G. Krantz and H. R. Parks.
Regarding your specific question, notice that $$0=\Phi_1(x,g(x))=F_1\Big(x,g_1(x),\ldots,g_{m-1}(x),\varphi\big(x,g_1(x),\ldots,g_{m-1}(x)\big)\Big).$$ Analogously for $F_2,\ldots,F_{m-1}$ and for $F_m$. Thus, $$h(x)=\Big(g_1(x),\ldots,g_{m-1}(x),\varphi\big(x,g_1(x),\ldots,g_{m-1}(x)\big)\Big).$$
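To spell out why $(x,g(x))\in U$, here is a sketch under the assumption (which the standard proof arranges, since $U$ is open) that the neighborhoods produced by the induction hypothesis are shrunk so that $W \times V \subseteq U$:

```latex
% Assumption: W and V are chosen (after shrinking, as the openness of U
% allows) so that W \times V \subseteq U.
\[
  x \in W \;\Longrightarrow\; (x, g(x)) \in W \times V \subseteq U,
\]
% hence both \Phi_i(x, g(x)) and \varphi(x, g(x)) are defined, and
\[
  F_m(x, h(x)) = F_m\bigl(x, g(x), \varphi(x, g(x))\bigr) = 0,
\]
% because F_m(\,\cdot\,, \varphi(\,\cdot\,)) \equiv 0 on U.
```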
One approach to understanding an inductive proof is to reproduce it for $m=1$, $m=2$, and $m=3$. Since you already understood the case $m=1$, I suggest that you develop a complete proof for the case $m=2$ (this case should be easy). Then try to develop a complete proof for the case $m=3$ (if you overcome this case then you are almost done). Finally, try the general case.
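For instance, here is a sketch of how the case $m=2$ unfolds in the notation of the proof above (the verification that $\Phi_1$ satisfies the hypotheses of Theorem 1 is omitted):

```latex
% Case m = 2: F = (F_1, F_2) : G \subseteq \mathbb{R}^{n+2} \to \mathbb{R}^2
% with \det(\partial F/\partial y) \ne 0 at (x^0, y^0);
% w.l.o.g. \partial F_2/\partial y_2 \ne 0 there.
%
% Step 1 (Theorem 1 applied to F_2): solve for y_2 near the base point,
\[
  y_2 = \varphi(x, y_1), \qquad F_2\bigl(x, y_1, \varphi(x, y_1)\bigr) = 0 .
\]
% Step 2 (reduction to m = 1): define
\[
  \Phi_1(x, y_1) := F_1\bigl(x, y_1, \varphi(x, y_1)\bigr)
\]
% and apply Theorem 1 to \Phi_1 to get y_1 = g(x) with \Phi_1(x, g(x)) = 0.
%
% Step 3 (assembly): set
\[
  h(x) := \bigl(g(x), \varphi(x, g(x))\bigr), \qquad F\bigl(x, h(x)\bigr) = 0 .
\]
```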
I also suggest that you take a look at the article ``The Implicit and the Inverse Function Theorems: Easy Proofs'', Real Analysis Exchange, Vol. 39(1), pp. 207-218 (2013/2014), by O. de Oliveira. You can find a preprint on the internet.
Best wishes,
Oswaldo