In the book Riemannian Geometry by Gallot, Hulin and Lafontaine, a proposition which characterises equivalent definitions of submanifolds is given as follows:
1.3 Proposition The following are equivalent:
i) $M$ is a $C^p$ submanifold of dimension $n$ of $\mathbb{R}^{n+k}$.
ii) For any $x$ in $M$, there exist open neighbourhoods $U$ and $V$ of $x$ and $0$ in $\mathbb{R}^{n+k}$ respectively, and a $C^p$ diffeomorphism $f : U\to V$ such that $f(U \cap M)=V \cap (\mathbb{R}^n \times \{0\})$.
iii) For any $x$ in $M$, there exist a neighbourhood $U$ of $x$ in $\mathbb{R}^{n+k}$, a neighbourhood $\Omega$ of $0$ in $\mathbb{R}^n$, and a $C^p$ map $g: \Omega \to \mathbb{R}^{n+k}$ such that $( \Omega, g)$ is a local parametrization of $M \cap U$ around $x$ (that is, $g$ is a homeomorphism from $\Omega$ onto $M \cap U$ and $g'(0)$ is injective).
I am trying to show that iii) implies ii) and I think that I am nearly there except I am having trouble with the following detail. I will first summarise my problem and then fill in the details.
In brief, my main problem in going from iii) to ii) is that to make my proof work I seem to also require that $g(0)=x$, whereas the authors only require that $g$ is a homeomorphism from $\Omega$ onto $M \cap U$ and that $g'(0)$ is injective.
The details of what I attempted are as follows:
(Attempted) Proof that iii) $\implies$ ii).
Suppose $M \subseteq \mathbb{R}^{n+k}$ satisfies the conditions of iii) and fix $x \in M$. Then we have a neighbourhood $W$ of $x$ in $\mathbb{R}^{n+k}$, a neighbourhood $\Omega$ of $0$ in $\mathbb{R}^n$, and a $C^p$ map $g: \Omega \to \mathbb{R}^{n+k}$ such that $g$ is a homeomorphism from $\Omega$ onto $M \cap W$ and $g'(0)$ is injective.
Since the Jacobian matrix $g'(0) = Dg(0)$, which has $n+k$ rows and $n$ columns, is injective (hence of rank $n$), its $n$ columns are linearly independent. We can therefore 'complete' or 'fill out' this matrix with $k$ further columns to a square matrix of full rank $n+k$ (i.e. nonsingular). The filled-out matrix, with all partial derivatives evaluated at $0$, is:
\begin{bmatrix} \frac{\partial g_1}{\partial x_1} & \frac{\partial g_1}{\partial x_2} & \dots & \frac{\partial g_1}{\partial x_n} & a_{11} & \dots & a_{1k} \\ \frac{\partial g_2}{\partial x_1} & \frac{\partial g_2}{\partial x_2} & \dots & \frac{\partial g_2}{\partial x_n} & a_{21} & \dots & a_{2k} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ \frac{\partial g_n}{\partial x_1} & \frac{\partial g_n}{\partial x_2} & \dots & \frac{\partial g_n}{\partial x_n} & a_{n1} & \dots & a_{nk} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ \frac{\partial g_{n+k}}{\partial x_1} & \frac{\partial g_{n+k}}{\partial x_2} & \dots & \frac{\partial g_{n+k}}{\partial x_n} & a_{(n+k)1} & \dots & a_{(n+k)k} \end{bmatrix}
With this in mind we can define a new function $h : \Omega \times \mathbb{R}^k \to \mathbb{R}^{n+k}$ componentwise by
$$h_i(x_1,\dots,x_{n+k}) = g_i(x_1,\dots,x_n) + a_{i1}\,x_{n+1} + a_{i2}\,x_{n+2} + \dots + a_{ik}\,x_{n+k}, \qquad i = 1,\dots,n+k.$$
Then $Dh(0)$ is exactly the matrix above, which is nonsingular. Therefore, by the inverse function theorem, there is a neighbourhood $V$ of $0$ which $h$ carries in a one-to-one fashion onto an open set $U$ of $\mathbb{R}^{n+k}$, and we can guarantee that $h(0)$ (which is equal to $g(0)$) lies in $U$.
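As a sanity check on this construction, here is a toy instance of my own (not from the book) with $n = k = 1$: the parabola $M = \{(u, u^2)\}$ parametrized by $g(t) = (t, t^2)$, with the single column of $g'(0) = (1,0)^T$ completed by $(0,1)^T$:

```python
# Toy example (not from the book): n = k = 1, M = {(u, u^2)} in R^2,
# parametrized by g(t) = (t, t^2); g'(0) = (1, 0)^T is injective.

def g(t):
    return (t, t * t)

# Complete g'(0) with the column (0, 1)^T and set
# h : R^1 x R^1 -> R^2, h(t, s) = g(t) + s * (0, 1).
def h(t, s):
    u, v = g(t)
    return (u, v + s)

# Dh(0, 0) = [[1, 0], [0, 1]] is nonsingular, so the inverse function theorem
# applies; in this toy case h is even globally invertible, with explicit inverse:
def h_inv(u, v):
    return (u, v - u * u)

# h_inv plays the role of the diffeomorphism f in (ii): it maps points of M
# (where v = u^2) into R^1 x {0}.
t, s = h_inv(*h(0.5, 0.0))   # start at the point g(0.5) of M and come back
print(t, s)                  # prints 0.5 0.0 -- the second coordinate vanishes on M
```

Here $h$ happens to be invertible on all of $\mathbb{R}^2$, so the inverse function theorem is not even needed; in general one only obtains a local inverse near $0$, which is exactly where the difficulty below comes from.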
This is where I think my problem arises. Although the original $W$ contained $x$, I cannot seem to guarantee that $U$ contains $x$, because the existence of $U$ relied on the inverse function theorem, which may return a smaller open set than $W$.
Therefore I am not certain I am on the correct track, but apart from this little detail, if I could assume that $g(0)=x$, I think this would all work.
Help would be much appreciated as I am attempting to self-learn with no mathematical contacts at the moment.
Usually one requires in (ii) that $f(x) = 0$ and in (iii) that $g(0) = x$. However, one can also omit these requirements, but in that case it is in my opinion misleading to require that $V$ is a neighborhood of $0$ in $\mathbb{R}^{n+k}$ and that $\Omega$ is a neighborhood of $0$ in $\mathbb{R}^n$. In fact, this is completely irrelevant unless we want to have for some reason that $f(x) = 0$ and $g(0) = x$.
Let us consider (ii) and (iii) as they are.
In (ii) we know that $f(x) \in \mathbb{R}^n \times \{0\}$, but we do not know that $f(x) = 0$. Whether $0 \in V$ is completely irrelevant. If we want to have that, we can make a translation on $\mathbb{R}^{n+k}$ to achieve it: the map $T_{-f(x)}: \mathbb{R}^{n+k} \to \mathbb{R}^{n+k}$, $T_{-f(x)}(y) = y - f(x)$, is a diffeomorphism, and $\tilde f : U \stackrel{f}{\to} V \stackrel{T_{-f(x)}}{\to} T_{-f(x)}(V)$ has all the desired properties.
The situation in (iii) is similar.
This means that you may assume that $f(x) = 0$ and $g(0) = x$ which makes your proof work.
Edited:
I have to make a correction: (iii) is not similar. I thought that a local parametrization of $M \cap U$ around $x$ should have $g'(y)$ injective for all $y \in \Omega$, but this is not required. In fact we need that $g'(g^{-1}(x))$ is injective; otherwise the given assumptions have no chance of ensuring the existence of a (small) neighborhood of $g^{-1}(x)$ which is mapped diffeomorphically onto a (small) neighborhood of $x$ in $M \cap U$. But again, if we assume that $g'(g^{-1}(x))$ is injective, then a translation allows us to assume w.l.o.g. that $g^{-1}(x) = 0$.
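For concreteness, the translation for (iii) can be written out as follows (a sketch in the notation above, writing $y_0$ for the preimage of $x$):

```latex
% Recentering the parametrization so that the new map sends 0 to x.
\[
  y_0 = g^{-1}(x) \in \Omega, \qquad
  \tilde{\Omega} = \Omega - y_0 = \{\, y - y_0 : y \in \Omega \,\}, \qquad
  \tilde{g}(y) = g(y + y_0).
\]
% Then \tilde{\Omega} is a neighborhood of 0 in R^n, and
\[
  \tilde{g}(0) = x, \qquad
  \tilde{g}'(0) = g'(y_0) = g'(g^{-1}(x)) \ \text{injective},
\]
% so (\tilde{\Omega}, \tilde{g}) is again a local parametrization of M \cap U,
% now centered so that \tilde{g}(0) = x.
```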