This is an old comprehensive exam question I've been working on.
Let $\gamma_i:\mathbb{R}\rightarrow \mathbb{R}^3$ be $C^1$ curves for $i=1,2$ such that $\gamma_1(0)=\gamma_2(0)=p$. Assume $\gamma_1'(0)$ and $\gamma_2'(0)$ are linearly independent. Prove the existence of $C^1$ coordinates $(u,v,w)$ on a neighborhood of $p\in \mathbb{R}^3$ in which $\gamma_1, \gamma_2$ map to the $u$- and $v$-coordinate axes, respectively.
My thoughts are that this should be an Inverse Function Theorem or Implicit Function Theorem problem, but I'm having trouble defining the correct map and handling the coordinate-axes part.
Here are my thoughts so far:
If $\gamma_1'(0)$ and $\gamma_2'(0)$ are linearly independent, then $\gamma_1'(0)\times \gamma_2'(0) \neq 0$, so at least one coordinate of the cross product is nonzero; WLOG assume it's the third entry. Writing $\gamma_i = (\gamma_{1i},\gamma_{2i},\gamma_{3i})$, the matrix $$\begin{pmatrix} \gamma'_{11}(0)& \gamma'_{12}(0)\\ \gamma'_{21}(0)& \gamma'_{22}(0)\end{pmatrix}$$ is invertible.

I thought to use the Inverse Function Theorem, but this requires a $C^1$ map $F:\mathbb{R}^n\rightarrow \mathbb{R}^n$ between spaces of the same dimension, and I can't seem to find a map which does this and produces the above matrix.

Then I thought about using the Implicit Function Theorem, which doesn't require the dimensions to be equal. In this case I tried $$F(x,y) = \begin{pmatrix}\gamma_1(x)\\\gamma_2(y)\end{pmatrix},$$ which goes from $\mathbb{R}^2$ to $\mathbb{R}^6$, and again I get the $2\times 2$ block $$DF_{1,2}(0,0)=\begin{pmatrix} \gamma'_{11}(0)& \gamma'_{12}(0)\\ \gamma'_{21}(0)& \gamma'_{22}(0)\end{pmatrix}.$$ So by the Implicit Function Theorem there are open sets $0\in U$ and $0\in V$ and a $C^1$ map $\phi:U\rightarrow V$ such that $$F(\phi(z),z) = p.$$ But here I'm stuck: I don't see how to conclude that $\gamma_1,\gamma_2$ map to the $u,v$ axes, assuming I'm on the right track at all.
The inverse and implicit function theorems are equivalent, so it's a matter of taste which one to apply. I usually prefer the inverse function theorem, so that's how I'll write the hint.
Consider $F:\Bbb{R}^3\to\Bbb{R}^3$ defined as \begin{align} F(u,v,w)&:= p+ [\gamma_1(u)-p] + [\gamma_2(v)-p]+ w\xi, \end{align} where $\xi\in\Bbb{R}^3$ is any vector such that $\{\gamma_1'(0),\gamma_2'(0),\xi\}$ is a basis for $\Bbb{R}^3$ (for instance, you can take $\xi=\gamma_1'(0)\times \gamma_2'(0)$). Notice that this definition ensures $F(u,0,0)=\gamma_1(u)$ and $F(0,v,0)=\gamma_2(v)$, which is partly why I defined $F$ the way I did. How does this help? I leave it to you to fill in the remaining details.
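To make the hint slightly more concrete (this is a sketch of the key computation, not the full argument): the Jacobian of $F$ at the origin has the three tangent directions as its columns, so the basis condition on $\{\gamma_1'(0),\gamma_2'(0),\xi\}$ is exactly what invertibility requires.

```latex
DF(0,0,0)
= \begin{pmatrix}
\vert & \vert & \vert \\
\gamma_1'(0) & \gamma_2'(0) & \xi \\
\vert & \vert & \vert
\end{pmatrix},
\qquad
\det DF(0,0,0) \neq 0 .
```

Since $F(0,0,0)=p$ and $DF(0,0,0)$ is invertible, the inverse function theorem applies at the origin; think about what the components of the local $C^1$ inverse of $F$ give you.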
As a fun exercise, think about how you can generalize the statement to higher dimensions, with more curves.