Let $U$ be an open subset of $\mathbb R^{n+m}=\mathbb R^n\times \mathbb R^m$ and $g:U\to\mathbb R^m$ a $C^1$ function.
Let $p=(x_0,y_0)\in U$ be a point such that $$g'(p):\mathbb R^{n+m}\to \mathbb R^m\text{ is surjective}\tag{$*$}$$
The book I'm reading says
(A) Without loss of generality, we can assume that the restriction $g'(p)\big |_{\{0\}\times \mathbb R^m}$ is an isomorphism.
Question: Why is there no loss of generality in this assumption? (I'm interested in an explicit proof that the general case can be reduced to this one.)
Context: The book proves the Inverse Function Theorem, in which the condition "$g'(p)\big |_{\{0\}\times \mathbb R^m}$ is an isomorphism" is an assumption, and then proves the Lagrange Multiplier Method as an application.
Because of (A), which appears in the proof of the Lagrange Multiplier Method, the Inverse Function Theorem implies the following:
(B) There are a neighborhood $A\subset \mathbb R^n$ of $x_0$, a neighborhood $V\subset U$ of $(x_0,y_0)=p$ and a $C^1$ function $\xi:A\to\mathbb R^m$ such that $$(x,\xi(x))\in V\quad\text{and}\quad g(x,\xi(x))=g(p),\qquad \forall\ x\in A$$
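To fix ideas, here is a standard illustration with $n=m=1$ (my own example, not the book's): take $U=\mathbb R^2$, $g(x,y)=x^2+y^2$ and $p=(0,1)$; then one may choose $A=(-1,1)$, $V=\{(x,y):y>0\}$ and $$\xi(x)=\sqrt{1-x^2},\qquad\text{so that}\quad g(x,\xi(x))=1=g(p)\quad\forall\ x\in A.$$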
My try: I know that, from $(*)$, there exists an $m$-dimensional subspace $X$ of $\mathbb R^{n+m}$ such that the restriction $g'(p)\big |_{X}$ is an isomorphism. So, my question is: how to pass rigorously from $X$ to $\{0\}\times \mathbb R^m$?
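(One way to produce such an $X$, for concreteness: the matrix of $g'(p)$ is $m\times(n+m)$ and has rank $m$ by $(*)$, so it has $m$ linearly independent columns, say those with indices $i_1<\cdots<i_m$; then $X=\operatorname{span}(e_{i_1},\dots,e_{i_m})$ works, because the matrix of $g'(p)\big|_X$ with respect to these bases is exactly the invertible $m\times m$ block formed by those columns.)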
Well, I know that there exists a bijective linear map $h:\{0\}\times \mathbb R^{m}\to X$. Let $H:\mathbb R^{n+m}\to\mathbb R^{n+m}$ be a bijective linear extension of $h$. Take $x_p=(x_p^1,x_p^2)\in \mathbb R^{n+m}$ such that $H(x_p)=p$. Define $\tilde{g}:H^{-1}(U)\to\mathbb R^{m}$ by $\tilde{g}(y)=g(H(y))$.
Then, $\tilde{g}'(x_p)\big |_{\{0\}\times \mathbb R^m}$ is an isomorphism. Is this correct? If so:
(C) There are a neighborhood $\tilde{A}\subset \mathbb R^n$ of $x_p^1$, a neighborhood $\tilde{V}\subset H^{-1}( U)$ of $(x_p^1,x_p^2)=x_p$ and a $C^1$ function $\tilde{\xi}:\tilde{A}\to\mathbb R^m$ such that $$(y,\tilde{\xi}(y))\in \tilde{V}\quad\text{and}\quad \tilde{g}(y,\tilde{\xi}(y))=\tilde{g}(x_p),\qquad \forall\ y\in \tilde{A}.$$
To finish my argument, I have to obtain (B) from (C). Is this possible? I suspect that we should define $V=H(\tilde{V})$. But how should $A$ and $\xi$ be defined from $\tilde{A}$ and $\tilde{\xi}$?
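Here is also a small numerical sanity check of the construction above; the concrete $g$, the point $p$ and the choice of $H$ as a coordinate permutation are purely illustrative choices of mine, not the book's:

```python
import numpy as np

# Purely illustrative choices (not from the book): n = m = 1,
# g(x, y) = x^2 + y^2 and p = (2, 0).  Here g'(p) = [4, 0], so the
# restriction of g'(p) to {0} x R is NOT an isomorphism, but its
# restriction to X = span of the first coordinate direction is,
# and H below is the coordinate permutation sending {0} x R^m onto X.
n, m = 1, 1
p = np.array([2.0, 0.0])

def g(z):
    return np.array([z[0] ** 2 + z[1] ** 2])

def jacobian(func, z, eps=1e-6):
    """Forward-difference Jacobian, good enough for a sanity check."""
    base = func(z)
    J = np.zeros((base.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z); dz[i] = eps
        J[:, i] = (func(z + dz) - base) / eps
    return J

J = jacobian(g, p)                 # matrix of g'(p), shape (m, n+m)
cols = [0]                         # indices of m independent columns (they span X)
rest = [i for i in range(n + m) if i not in cols]
H = np.eye(n + m)[:, rest + cols]  # matrix of the bijection H sending {0} x R^m onto X
x_p = np.linalg.solve(H, p)        # the point with H(x_p) = p

g_tilde_prime = J @ H              # chain rule: (g o H)'(x_p) = g'(p) H, since H is linear
block = g_tilde_prime[:, n:]       # matrix of the restriction to {0} x R^m
print(np.linalg.det(block))        # ~ 4.0, nonzero, so it is an isomorphism
```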
[It seems I found an answer to my own question.]
Why is there no loss of generality in this assumption?
Because if it fails, we can change coordinates by a linear isomorphism and apply exactly the same argument; at the end, only one extra calculation is needed to recover the desired result in the original coordinates.
Here is the explicit proof (with the notation as in the post, and $g=(g_1,...,g_m)$):
The aim of the argument is to prove the Lagrange Multiplier Method. Thus, we want to prove that $$\nabla f(p)=\lambda_1\nabla g_1(p)+\cdots+\lambda_m \nabla g_m(p).\tag{1}$$
Define $\tilde{f}:H^{-1}(U)\to\mathbb R$ by $\tilde{f}(y):= f(H(y))$. Then $\tilde{f}$, $\tilde{g}$ and $x_p$ satisfy the same hypotheses as $f$, $g$ and $p$. Thus, exactly the same argument leads us to $$\nabla \tilde{f}(x_p)=\lambda_1\nabla \tilde{g}_1(x_p)+\cdots+\lambda_m \nabla \tilde{g}_m(x_p).\tag{2}$$
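Why the hypotheses really do transfer: $\tilde f$ and $\tilde g$ are $C^1$ as compositions of $C^1$ maps; since $H$ is linear, the chain rule gives $$\tilde{g}'(x_p)=g'(H(x_p))\circ H'(x_p)=g'(p)\circ H,$$ which is surjective because $H$ is bijective; and $x_p$ is a local extremum of $\tilde f$ on $\{y\in H^{-1}(U):\tilde g(y)=\tilde g(x_p)\}$ (the constraint hypothesis of the Lagrange Multiplier Method) because $H$ is a homeomorphism carrying this set onto $\{z\in U: g(z)=g(p)\}$ and $x_p$ to $p$, while $\tilde f=f\circ H$.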
In this case, no assumption like (A) is needed because $\tilde{g}'(x_p)\big |_{\{0\}\times \mathbb R^m}$ is indeed an isomorphism.
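Explicitly, using the chain-rule identity $\tilde{g}'(x_p)=g'(p)\circ H$ and the fact that $H$ maps $\{0\}\times\mathbb R^m$ bijectively onto $X$ (its restriction there being $h$), we get $$\tilde{g}'(x_p)\big |_{\{0\}\times \mathbb R^m}=g'(p)\big |_{X}\circ h,$$ a composition of two isomorphisms.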
Note that, for each $i=1,...,n+m$,
$$\partial_i\tilde{f}(x_p)=\partial_i(f\circ H)(x_p)=\sum_{k=1}^{n+m}\partial_kf(H(x_p))\,\partial_i H_k(x_p)=\langle \nabla f(p),H(e_i)\rangle$$ (in the last step we use that $H$ is linear, so $\partial_i H_k$ is constant and equals the $k$-th component of $H(e_i)$) and, analogously, $$\partial_i\tilde{g}_j(x_p)=\partial_i(g\circ H)_j(x_p)=\partial_i(g_j\circ H)(x_p)=\langle \nabla g_j(p),H(e_i)\rangle \qquad (j=1,...,m)$$
Thus, from $(2)$, $$\langle \nabla f(p)-\lambda_1\nabla g_1(p)-\cdots-\lambda_m\nabla g_m(p),H(e_i)\rangle=0,\qquad \forall\ i=1,...,n+m$$
which implies $(1)$, because $H$ is bijective, so the vectors $H(e_1),\dots,H(e_{n+m})$ form a basis of $\mathbb R^{n+m}$ and the only vector orthogonal to all of them is $0$.
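Finally, a quick numerical sanity check of the key identity $\partial_i\tilde{f}(x_p)=\langle \nabla f(p),H(e_i)\rangle$ (equivalently, $\nabla\tilde f(x_p)=H^{\mathsf T}\nabla f(p)$), with a random invertible $H$ and an arbitrary illustrative $f$; both are hypothetical choices made only for the test:

```python
import numpy as np

# Check that the i-th partial of f o H at x_p equals <grad f(p), H e_i>,
# i.e. grad(f o H)(x_p) = H^T grad f(p).  f and H are illustrative only.
rng = np.random.default_rng(0)
dim = 3                                   # dim = n + m; any value works

def f(z):
    return np.sin(z[0]) + z[1] * z[2] + z[0] ** 2

H = rng.normal(size=(dim, dim))           # invertible with probability 1
p = rng.normal(size=dim)
x_p = np.linalg.solve(H, p)               # the point with H(x_p) = p

def grad(func, z, eps=1e-6):
    """Central-difference gradient."""
    out = np.zeros_like(z)
    for i in range(z.size):
        dz = np.zeros_like(z); dz[i] = eps
        out[i] = (func(z + dz) - func(z - dz)) / (2 * eps)
    return out

lhs = grad(lambda y: f(H @ y), x_p)       # gradient of f~ = f o H at x_p
rhs = H.T @ grad(f, p)                    # components <grad f(p), H e_i>
print(np.allclose(lhs, rhs, atol=1e-5))   # True
```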