I am reading Taubes's book on differential geometry and am wondering about a proof. My apologies if this is simple, as I'm still grappling with the material. My question concerns material in chapter 8, page 83.
Embed $S^n$ into $\mathbb R^{n+1}$ as the set of points with $|x|=1$. Pulling back the standard metric on $\mathbb R^{n+1}$ gives a metric on $S^n$ called the round metric. Taubes asserts that the geodesic equation for a curve $\gamma: \mathbb R \rightarrow S^n \subset\mathbb R^{n+1}$ with coordinates $(x^i(t))$ is given by $$\ddot x^j + x^j|\dot x|^2=0.$$
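(Not from the book, just a sanity check I ran: great circles $x(t) = \cos(t)\,u + \sin(t)\,v$ with $u, v$ orthonormal should solve the asserted equation, since $\ddot x = -x$ and $|\dot x|^2 = 1$. A minimal numpy sketch confirming this numerically:)

```python
import numpy as np

# A great circle x(t) = cos(t) u + sin(t) v with u, v orthonormal in R^{n+1}
# should satisfy the asserted geodesic equation  x'' + x |x'|^2 = 0.
n = 3
rng = np.random.default_rng(0)
u = rng.normal(size=n + 1)
u /= np.linalg.norm(u)
v = rng.normal(size=n + 1)
v -= (v @ u) * u                  # Gram-Schmidt: make v orthogonal to u
v /= np.linalg.norm(v)

t = 0.7
x = np.cos(t) * u + np.sin(t) * v
xdot = -np.sin(t) * u + np.cos(t) * v
xddot = -np.cos(t) * u - np.sin(t) * v      # second derivative is -x

residual = xddot + x * (xdot @ xdot)        # |x'|^2 = 1 for this parametrization
print(np.linalg.norm(residual))             # ~ 0 up to round-off
```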
To show this, he introduces the map $y\rightarrow (y, (1-|y|^2)^{1/2})$ from $\mathbb R^n$ to $\mathbb R^n \times \mathbb R$ that embeds the ball of radius $1$ into $S^n$. Pulling back the round metric gives $$g_{ij} = \delta_{ij} + y_i y_j(1-|y|^2)^{-1}.$$ (This expression fixes a typo found in the book and pointed out here.)
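(Again not from the book: the corrected pullback formula can be checked symbolically. The sketch below, for $n = 2$, computes the Jacobian of the embedding $y \mapsto (y, (1-|y|^2)^{1/2})$ and compares $J^T J$ with the claimed $g_{ij}$.)

```python
import sympy as sp

# Pull back the Euclidean metric on R^{n+1} through y -> (y, sqrt(1 - |y|^2))
# and compare with  g_ij = delta_ij + y_i y_j / (1 - |y|^2),  here for n = 2.
n = 2
y = sp.symbols(f'y1:{n+1}', real=True)
r2 = sum(yi**2 for yi in y)
embed = sp.Matrix(list(y) + [sp.sqrt(1 - r2)])    # the embedding map
J = embed.jacobian(sp.Matrix(y))                  # (n+1) x n Jacobian
g = sp.simplify(J.T * J)                          # pulled-back metric

g_claimed = sp.eye(n) + sp.Matrix(n, n, lambda i, j: y[i]*y[j]/(1 - r2))
print(sp.simplify(g - g_claimed))                 # zero matrix
```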
Expanding in a power series and writing out the geodesic equation gives $$\ddot y^j+y^j|\dot y|^2 +O(|y|^2)=0.$$
Taubes asserts that since this matches the original equation to leading order in $y$, the claim is proved. Why is this? That is, why does it suffice to check that the equations agree to leading order? His justification, which I do not understand, is:
> This agrees with what is written above to leading order in $y$. Since the metric and the sphere are invariant under rotations of $S^n$, as is the equation for $x$ above, this verifies the equation at all points.
Presumably the second sentence is just referring to the fact that, by symmetry, it suffices to verify the equation on the given coordinate patch, but perhaps there is more I am missing.
I am also confused because the equation in $y$ is in $\mathbb R^n$, but the equation in $x$ is in $\mathbb R^{n+1}$. What is going on here?
Frankly, I don't understand this type of argument; it looks to me as if taken from a 19th century treatise. I think the stupid but straightforward approach is best here, and it is not really difficult.
Assuming that the geodesic equation is $\ddot x ^i (t) + \sum \limits _{j, k}\Gamma ^i _{jk} \dot x ^j (t) \dot x ^k (t) = 0$ and that $\Gamma ^i _{jk} = \frac 1 2 \sum \limits _a g^{ia} ( g_{aj, k} + g_{ak,j} - g_{jk,a} )$ (the comma denotes differentiation with respect to the corresponding local coordinate), the first thing to do is to compute $g_{ab,c} = \frac {\partial g_ {ab}} {\partial y_c} = \partial _c \, g_{ab}$. This gives
$$\partial _c g_{ab} = \frac {(\delta _{ac} y_b + \delta_{bc} y_a) (1-|y|^2) + 2 y_a y_b y_c} {(1-|y|^2)^2}$$
so that
$$g_{aj, k} + g_{ak,j} - g_{jk,a} = 2 \frac {y_a} {1-|y|^2} g_{jk} .$$
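(If you don't trust the index gymnastics, this identity is easy to verify symbolically. A hedged sketch, specialized to $n = 2$:)

```python
import sympy as sp

# Symbolic check, for n = 2, that
#   g_{aj,k} + g_{ak,j} - g_{jk,a} = 2 y_a (1-|y|^2)^{-1} g_{jk}.
n = 2
y = sp.symbols(f'y1:{n+1}', real=True)
r2 = sum(yi**2 for yi in y)
g = sp.eye(n) + sp.Matrix(n, n, lambda i, j: y[i]*y[j]/(1 - r2))

for a in range(n):
    for j in range(n):
        for k in range(n):
            lhs = sp.diff(g[a, j], y[k]) + sp.diff(g[a, k], y[j]) - sp.diff(g[j, k], y[a])
            rhs = 2 * y[a] / (1 - r2) * g[j, k]
            assert sp.simplify(lhs - rhs) == 0
print("combination identity holds for n = 2")
```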
Now, let us compute the inverse matrix of $(g_{ij}) _{i,j}$. The following computations will be done formally, ignoring convergence issues, since in the end the results will turn out to be valid. If we let $M = \Big( \frac {y_i y_j} {1-|y|^2} \Big) _{i,j}$, then we must invert $I + M$, the formal inverse of which is $\sum \limits _{p \ge 0} (-1)^p M^p$. Note that $M^2 = \frac {|y|^2} {1-|y|^2} M$, so that $M^p = \Big( \frac {|y|^2} {1-|y|^2} \Big) ^{p-1} M$ for $p \ge 1$. Then, our formal inverse becomes $I + \sum \limits _{p \ge 1} (-1)^p \Big( \frac {|y|^2} {1-|y|^2} \Big) ^{p-1} M = I - (1-|y|^2) M$. Remarkably, this is not just a formal inverse, but a true matrix inverse, as you can check by multiplying it with $I+M$. Thus, this gives $g^{ia} = \delta ^{ia} - y^i y^a$.
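(A quick numerical spot check of the inverse at a random point of the unit ball, nothing more than a sketch:)

```python
import numpy as np

# Numerical check that  (I + M)^{-1} = I - y y^T,  i.e.  g^{ia} = delta^{ia} - y^i y^a,
# at a random point y with |y| < 1.
n = 4
rng = np.random.default_rng(1)
y = rng.uniform(-0.4, 0.4, size=n)     # guarantees |y| < 1
r2 = y @ y
M = np.outer(y, y) / (1 - r2)
g = np.eye(n) + M                      # the metric in these coordinates
g_inv = np.eye(n) - np.outer(y, y)     # claimed inverse

print(np.max(np.abs(g @ g_inv - np.eye(n))))   # ~ 0
```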
Plugging the above into our calculations, we get
$$\Gamma ^i _{jk} = \frac 1 2 \sum \limits _a (\delta ^{ia} - y^i y^a) 2 \frac {y_a} {1-|y|^2} g_{jk} = y^i g_{jk} (y) ,$$
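(The whole Christoffel computation can also be delegated to a computer algebra system. A sketch for $n = 2$, verifying both the inverse and $\Gamma^i_{jk} = y^i g_{jk}$:)

```python
import sympy as sp

# Symbolic check (n = 2) that the Christoffel symbols of g are Gamma^i_{jk} = y^i g_{jk}.
n = 2
y = sp.symbols(f'y1:{n+1}', real=True)
r2 = sum(yi**2 for yi in y)
g = sp.eye(n) + sp.Matrix(n, n, lambda i, j: y[i]*y[j]/(1 - r2))
g_inv = sp.eye(n) - sp.Matrix(n, n, lambda i, j: y[i]*y[j])   # claimed inverse
assert sp.simplify(g * g_inv) == sp.eye(n)                    # it really is the inverse

for i in range(n):
    for j in range(n):
        for k in range(n):
            Gamma = sum(g_inv[i, a] * (sp.diff(g[a, j], y[k]) + sp.diff(g[a, k], y[j])
                                       - sp.diff(g[j, k], y[a]))
                        for a in range(n)) / 2
            assert sp.simplify(Gamma - y[i] * g[j, k]) == 0
print("Gamma^i_jk = y^i g_jk verified for n = 2")
```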
so the equation of the geodesics becomes
$$0 = \ddot x ^i (t) + \sum \limits _{j, k} x ^i (t) g_{jk} \big( x(t) \big) \dot x ^j (t) \dot x ^k (t) = \ddot x ^i (t) + | \dot x (t) |^2 x ^i (t) .$$
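(As one last sanity check, not part of the derivation: one can integrate this chart equation numerically, lift the solution back to the sphere via $y \mapsto (y, \sqrt{1-|y|^2})$, and verify that the lifted curve is a great circle, i.e. stays in the plane spanned by its initial position and velocity. A hand-rolled RK4 sketch, with made-up initial data:)

```python
import numpy as np

# Integrate the chart geodesic equation  y'' = -y (g_{jk} y'^j y'^k)  with RK4,
# lift to S^2 via y -> (y, sqrt(1-|y|^2)), and check the lifted curve stays in
# the 2-plane spanned by its initial position and velocity (a great circle).
def metric(y):
    return np.eye(len(y)) + np.outer(y, y) / (1 - y @ y)

def accel(y, v):
    return -y * (v @ metric(y) @ v)

def rk4_step(y, v, h):
    def f(y_, v_):
        return v_, accel(y_, v_)
    k1y, k1v = f(y, v)
    k2y, k2v = f(y + h/2*k1y, v + h/2*k1v)
    k3y, k3v = f(y + h/2*k2y, v + h/2*k2v)
    k4y, k4v = f(y + h*k3y, v + h*k3v)
    return (y + h/6*(k1y + 2*k2y + 2*k3y + k4y),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

def lift(y):
    return np.append(y, np.sqrt(1 - y @ y))

y = np.array([0.1, 0.2])
v = np.array([0.3, -0.1])
x0 = lift(y)                                          # |x0| = 1 by construction
x0dot = np.append(v, -(y @ v) / np.sqrt(1 - y @ y))   # chain rule on last coordinate
e1 = x0
e2 = x0dot - (x0dot @ e1) * e1
e2 /= np.linalg.norm(e2)

for _ in range(200):
    y, v = rk4_step(y, v, 0.01)
x = lift(y)
off_plane = x - (x @ e1) * e1 - (x @ e2) * e2
print(np.linalg.norm(off_plane))   # ~ 0: the lifted geodesic is a great circle
```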
One final note: the coordinates $x^i (t)$ of $x(t)$ are local coordinates on $S^n$, not coordinates in $\Bbb R ^{n+1}$! That is, $x^i (t) = y^i (x(t))$, where $(y^1, \dots , y^n)$ are local coordinates on $S^n$.
PS: The reason why the formal inverse computed above turns out to be a valid matrix inverse is essentially the following: the expression of $(I + M)^{-1}$ is an analytic function of $y^1, \dots, y^n$; it is true that its Taylor series around $0$ is convergent only in some ball around $0$, but its closed-form expression stays valid on a larger domain.