A question on how a vector can be given by a variable


This isn’t a repeat question as I know the reasons as to why the identification is made; this question is about an adjacent topic: implementation.

In differential geometry, it's common to identify vectors with differential operators: $\frac{d}{d\lambda}=\frac{dx^i}{d\lambda} \frac{\partial}{\partial x^i}=v^i e_i$. Here $\lambda$ parametrizes a curve $\gamma: I \to M$, and $\frac{d}{d\lambda}$ is a vector field tangent to the curve. My question is how the curve can be recovered from its parameter alone; after all, the same variable can parametrize multiple curves. Given a smooth function $f: M \to \mathbb{R}$, its directional derivative is supposed to be $df/d\lambda$, but in the chain rule expansion the coordinate variables $x^i$ have to be treated as functions of $\lambda$ for this to work out. How does this identification of variable and function work?

With this in mind, given a curve $\gamma$, how would you find a vector tangent to it? I know you could use $\gamma'\cdot \nabla$, but I was hoping for a method that uses the chain rule expansion, or an explanation of why that expansion is valid.

My confusions can be summarized as:

  1. A parameter alone isn't enough to determine the curve and its tangent vector.

  2. When using the chain rule, the independent variables are treated as dependent on a path parameter.

On BEST ANSWER

I don't quite understand your question but maybe this will be an answer to it. In this context "curve" means "parameterized curve" by default; that is, $\gamma$ is a smooth function from, say, $\mathbb{R}$ (or an open interval, or whatever) to our manifold $M$. At any point $t_0 \in \mathbb{R}$ this function has a differential

$$d \gamma_{t_0} : T_{t_0}(\mathbb{R}) \to T_{\gamma(t_0)}(M)$$

sending tangent vectors on $\mathbb{R}$ to tangent vectors on $M$. Now, $\mathbb{R}$ itself has the special property that each of its tangent spaces can be canonically identified with $\mathbb{R}$ itself (via translation); this is why we never needed to think about tangent bundles and so forth when we were just doing calculus on $\mathbb{R}$ itself. So the differential is a function

$$d \gamma_{t_0} : \mathbb{R} \to T_{\gamma(t_0)}(M)$$

which, by linearity, is determined by the image of $1$, which is just some tangent vector $d \gamma_{t_0}(1) \in T_{\gamma(t_0)}(M)$. As $t_0$ varies, this collection of tangent vectors is our vector field tangent to the curve.
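As a concrete sketch of this construction (the unit-circle curve below is my own example, not from the question), we can compute $d\gamma_{t_0}(1)$ for a curve in $\mathbb{R}^2$ with sympy by differentiating each component and evaluating at $t_0$:

```python
import sympy as sp

# A hypothetical parameterized curve gamma: R -> R^2,
# here the unit circle gamma(lam) = (cos lam, sin lam).
lam = sp.symbols('lam')
gamma = sp.Matrix([sp.cos(lam), sp.sin(lam)])

# d(gamma)_{t0}(1): differentiate each component, then evaluate at t0.
# This is the tangent vector to the curve at the point gamma(t0).
t0 = sp.pi / 2
tangent = gamma.diff(lam).subs(lam, t0)
print(tangent)  # Matrix([[-1], [0]])
```

At $t_0 = \pi/2$ the curve sits at $(0,1)$, the top of the circle, and the tangent vector $(-1,0)$ points in the direction of traversal, as expected.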

If $f$ is a smooth function $M \to \mathbb{R}$ (or, more generally, a smooth function on an open subset) then we can understand how quickly its value changes as we trace the curve $\gamma$ by considering the composition $f \circ \gamma : \mathbb{R} \to \mathbb{R}$, which is just an ordinary $1$-variable function, and hence has a derivative in the ordinary $1$-variable sense, which we might call $\frac{d(f \circ \gamma)}{d \lambda}$ (being careful to actually insert the function $\gamma$, which matters a lot here), where $\lambda$ is just a name for the copy of $\mathbb{R}$ that parameterizes our curve.
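Continuing the same kind of sketch (the function $f(x,y) = xy$ and the unit-circle curve are my own hypothetical choices), substituting the curve into $f$ first turns $f \circ \gamma$ into an ordinary one-variable function, which we then differentiate as usual:

```python
import sympy as sp

lam = sp.symbols('lam')
x, y = sp.symbols('x y')

# Hypothetical smooth function f: R^2 -> R and curve gamma (my own choices).
f = x * y
gamma = {x: sp.cos(lam), y: sp.sin(lam)}

# f o gamma is an ordinary one-variable function of lam...
f_along_curve = f.subs(gamma)          # cos(lam)*sin(lam)
# ...so d(f o gamma)/dlam is an ordinary one-variable derivative.
df_dlam = sp.diff(f_along_curve, lam)
print(sp.simplify(df_dlam))            # cos(2*lam)
```

Note that the substitution step is exactly where the coordinate variables $x^i$ "become" functions of $\lambda$: what is really meant is $x^i \circ \gamma$.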

By the chain rule, this derivative at a point $t_0$ is the composite $df_{\gamma(t_0)} \circ d \gamma_{t_0}$; here $d \gamma_{t_0}(1)$ is a vector and $df_{\gamma(t_0)}$ is a covector, and they pair to give just a number, as expected.
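We can check this chain-rule identity symbolically for the same hypothetical data ($f(x,y) = xy$ on the unit circle): the ordinary derivative of $f \circ \gamma$ agrees with the pairing of the covector $df_{\gamma(\lambda)}$ against the vector $d\gamma_\lambda(1)$.

```python
import sympy as sp

lam, x, y = sp.symbols('lam x y')

# Same hypothetical data as before: f(x, y) = x*y on the unit circle.
f = x * y
gamma = sp.Matrix([sp.cos(lam), sp.sin(lam)])

# Left side: ordinary derivative of the composition f o gamma.
lhs = sp.diff(f.subs({x: gamma[0], y: gamma[1]}), lam)

# Right side: the covector df_{gamma(lam)} paired with the vector
# d(gamma)_lam(1), i.e. sum_i (df/dx^i)|_{gamma(lam)} * dgamma^i/dlam.
grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)]).subs({x: gamma[0], y: gamma[1]})
rhs = (grad_f.T * gamma.diff(lam))[0, 0]

print(sp.simplify(lhs - rhs))  # 0: the two computations agree
```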

Finally, given some choice of local coordinates $x^i$, $df_{\gamma(t_0)}$ can be written as a linear combination of the covectors $dx^i_{\gamma(t_0)}$. This collection of covectors has a dual basis of vectors which we can write as $\frac{\partial}{\partial x^i}$, and then we can write $d \gamma_{t_0}(1)$ as a linear combination of these. The pairing between the basis and the dual basis is then how we compute the directional derivative in this case. But to my mind it's cleaner to work without coordinates first to see what is going on invariantly, and e.g. to avoid getting confused about the distinction between vectors and covectors.
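Spelling out that pairing (writing $\dot\gamma^i$ for $\frac{d(x^i\circ\gamma)}{d\lambda}$), the coordinate expansions are

$$df_{\gamma(t_0)} = \frac{\partial f}{\partial x^i}\bigg|_{\gamma(t_0)} dx^i, \qquad d\gamma_{t_0}(1) = \dot\gamma^i(t_0)\,\frac{\partial}{\partial x^i},$$

and since $dx^i\left(\frac{\partial}{\partial x^j}\right) = \delta^i_j$, pairing them gives

$$\frac{d(f\circ\gamma)}{d\lambda}\bigg|_{t_0} = df_{\gamma(t_0)}\big(d\gamma_{t_0}(1)\big) = \dot\gamma^i(t_0)\,\frac{\partial f}{\partial x^i}\bigg|_{\gamma(t_0)},$$

which is exactly the chain rule expansion from the question, with the coordinate functions explicitly composed with $\gamma$.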