Under what conditions can orthogonal vector fields make a curvilinear coordinate system?


I am considering $n$-dimensional Euclidean space $\mathbb{R}^n$. For each $x\in\mathbb{R}^n$, let $v_1(x), \cdots, v_n(x)$ be orthogonal vectors; as functions of $x$, the $v_i$ are differentiable and nonzero everywhere. For $i=1,\cdots,n$, let $\gamma_i(t_i)$ be the integral curves driven by $v_i(x)$, i.e. $$\frac{d\gamma_i(t_i)}{dt_i}=v_i(\gamma_i(t_i)).$$ The question is: can the $v_i$ always be "properly scaled" so that the $\gamma_i$ define a curvilinear coordinate system, i.e., so that any $x\in\mathbb{R}^n$ can be expressed as $x(t_1,\cdots, t_n)$? If not, under what conditions on the $v_i$ can this be realized?

An example I have in mind for the $v_i$ is the eigenvector fields of the Hessian of a smooth function $f:\mathbb{R}^n\rightarrow \mathbb{R}$. How should one choose the magnitudes of the eigenvectors to obtain a curvilinear coordinate system? The question asked above is more general than this example.
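For concreteness, here is a minimal numerical sketch of the Hessian example (my own illustration, all names hypothetical). It uses the quadratic $f(x,y) = x^2 + 3y^2$, whose Hessian is the constant matrix $\operatorname{diag}(2,6)$, so the eigenvector fields are the coordinate directions and the integral curves trivially form a coordinate grid:

```python
import numpy as np

# Hypothetical illustration: trace an integral curve of one Hessian-
# eigenvector field for f(x, y) = x**2 + 3*y**2.
def hessian(p):
    return np.array([[2.0, 0.0], [0.0, 6.0]])  # constant for this f

def eigenfield(p, i):
    """i-th unit eigenvector of the Hessian at p, with a fixed sign."""
    _, vecs = np.linalg.eigh(hessian(p))       # eigenvalues ascending
    v = vecs[:, i]
    # Eigenvectors have no canonical sign; fix an orientation so that
    # the resulting vector field is continuous:
    return v if v[np.argmax(np.abs(v))] > 0 else -v

def integral_curve(p0, i, dt=1e-3, steps=1000):
    """Forward-Euler approximation of gamma' = v_i(gamma)."""
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p = p + dt * eigenfield(p, i)
    return p

print(integral_curve([0.0, 1.0], 0))  # ≈ [1.0, 1.0]: the curve runs along y = 1
```

For a non-quadratic $f$ the eigenvector fields vary from point to point, and (as the answers below discuss) whether they can be scaled into coordinate fields is exactly the question.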


There are 2 answers below.


Ignoring the issue of possibly scaling the $\nu_i$ for the time being: you are essentially asking when a set of $n$ linearly independent vector fields $\nu_i$ forms the $\frac{\partial}{\partial x^i}$'s of some coordinate system $(x^i)$. A necessary condition is that the $\nu_i$ pairwise commute, i.e. that for each $i,j$

$$[\nu_i,\nu_j]=0.$$

This condition is also locally sufficient. If the $\nu_i$ commute, then their flow maps $F_t^i$ (which might only exist locally) also commute. We can use this property to construct a special coordinate system on $\mathbb{R}^n$ such that $\nu_i=\frac{\partial}{\partial x^i}$. This works as follows. Given an $x_o\in\mathbb{R}^n$ there is a mapping

$$\phi:(t_1,...,t_n)\mapsto F^1_{t_1}\circ F^2_{t_2}\circ\dots\circ F^n_{t_n}(x_o)$$

that is a diffeomorphism from a neighborhood of $0$ in "$t$-space" to a neighborhood of $x_o$ in $\mathbb{R}^n$ (that the map is a diffeomorphism follows from an application of the inverse function theorem). Now let's calculate the $\frac{\partial}{\partial x^i}$. By definition,

$$\frac{\partial}{\partial x^i}=\phi_*\frac{\partial}{\partial t_i}.$$

When $i=1$, the definition of the flow map gives

$$\frac{\partial}{\partial x^1}=\nu_1.$$

When $i=2$

$$\frac{\partial}{\partial x^2}(x)=(F^1_{t_1})_*\nu_2(x)=\nu_2(x),$$

because $F^1_{t_1}$ commutes with $F^2_{t_2}$. By a very similar argument, for all $i$ we have $\frac{\partial}{\partial x^i}=\nu_i$.
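As a numerical sanity check of this commuting-flows property (my own illustration, not part of the argument above): the polar coordinate fields $\partial/\partial r$ and $\partial/\partial\theta$, written in Cartesian components, commute, and applying their (approximate) flow maps in either order lands at the same point:

```python
import numpy as np

# The polar coordinate fields d/dr and d/dtheta in Cartesian components.
def v_r(p):
    x, y = p
    r = np.hypot(x, y)
    return np.array([x / r, y / r])

def v_theta(p):
    x, y = p
    return np.array([-y, x])

def flow(v, p, t, steps=20000):
    """Crude forward-Euler approximation of the time-t flow map of v."""
    dt = t / steps
    p = np.array(p, dtype=float)
    for _ in range(steps):
        p = p + dt * v(p)
    return p

p0 = [1.0, 0.0]
a = flow(v_theta, flow(v_r, p0, 0.5), 0.3)   # first F^r_{0.5}, then F^theta_{0.3}
b = flow(v_r, flow(v_theta, p0, 0.3), 0.5)   # the other order
print(np.allclose(a, b, atol=1e-3))  # True: the flows commute
```

Both orders end (up to discretization error) at radius $1.5$ and angle $0.3$, as the exact flows $r \mapsto r + s$ and rotation by $t$ predict.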


We first recall the following general result:

Given a frame $({\bf w}_a)$ (that is, $n$ (pointwise) linearly independent vector fields ${\bf w}_1, \ldots, {\bf w}_n$) on an open subset $U \subseteq \Bbb R^n$ (or indeed, on any differentiable manifold), the following are equivalent:

  1. For any $p \in U$ there are local coordinates $(u^a)$ on some open set containing $p$ for which $$\frac{\partial}{\partial u^a} = {\bf w}_a, \quad a = 1, \ldots, n.$$
  2. All of the Lie brackets $[{\bf w}_a, {\bf w}_b]$ of the frame are identically zero; in this case we say that the vector fields ${\bf w}_a$ commute (pairwise).

(This can be found, e.g., as Theorem 18.6 in Lee's Introduction to Smooth Manifolds---unfortunately the relevant page is not previewable with Google Books.)

If we decompose ${\bf w}_a = \sum_i {\bf w}_a^i \frac{\partial}{\partial x^i}$, then the Lie bracket is given by

$$[{\bf w}_a, {\bf w}_b] = \sum_{i, j} \left({\bf w}_a^j \frac{\partial {\bf w}_b^i}{\partial x^j} - {\bf w}_b^j \frac{\partial {\bf w}_a^i}{\partial x^j}\right) \frac{\partial}{\partial x^i}.$$
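This component formula is easy to evaluate mechanically. Here is a small sympy sketch (my own illustration, using the polar frame on $\Bbb R^2$ as a sample): the coordinate fields $\partial/\partial r$ and $\partial/\partial\theta$ have vanishing bracket, while the *orthonormal* frame $\partial/\partial r$, $\frac{1}{r}\partial/\partial\theta$ does not:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r = sp.sqrt(x**2 + y**2)

def bracket(a, b, coords):
    """Lie bracket components [a, b]^i = sum_j (a^j d_j b^i - b^j d_j a^i)."""
    n = len(coords)
    return [sp.simplify(sum(a[j] * sp.diff(b[i], coords[j])
                            - b[j] * sp.diff(a[i], coords[j])
                            for j in range(n)))
            for i in range(n)]

# Polar coordinate fields in Cartesian components: they commute.
d_r     = [x / r, y / r]
d_theta = [-y, x]
print(bracket(d_r, d_theta, [x, y]))      # [0, 0]

# The orthonormal frame d/dr, (1/r) d/dtheta does NOT commute:
print(bracket(d_r, [-y / r, x / r], [x, y]))  # nonzero: -(1/r^2) d/dtheta
```

The second computation illustrates why the scaling functions $f_a$ matter: an orthogonal frame can fail to commute even when a rescaling of it (here, multiplying the second field by $r$) does commute.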

Now, we want to know when we can scale the vector fields ${\bf v}_a$ in a given frame respectively by smooth, nonvanishing functions $f_a$ so that the vector fields $f_a {\bf v}_a$ are the coordinate vector fields for some coordinates $(u^a)$, and by the above result this is the case (at least locally) iff there are smooth, nonvanishing functions $f_a$ such that $$[f_a {\bf v}_a, f_b {\bf v}_b] = 0$$ for all $a, b$. (In fact, by antisymmetry of the Lie bracket we need only check this for $a < b$.) Substituting ${\bf w}_a = f_a {\bf v}_a$ in the above formula for the Lie bracket gives that this is equivalent to the existence of a (local) solution to the system $$\color{#3f3fff}{\sum_{j} \left[f_a {\bf v}_a^j \frac{\partial (f_b {\bf v}_b^i)}{\partial x^j} - f_b {\bf v}_b^j \frac{\partial (f_a {\bf v}_a^i)}{\partial x^j}\right] = 0, \quad a < b, \quad i = 1, \ldots, n \qquad (\ast)}$$ of $\frac{1}{2} n^2 (n - 1)$ partial differential equations in the $n$ functions $f_a$.

(None of these considerations [and in particular this system] depend on the metric outright, but we can simplify this system a little using the orthogonality condition, which in the above notation we can write as the system $$\sum_i {\bf v}_a^i {\bf v}_b^i = 0, \quad a < b$$ of $\frac{1}{2} n (n - 1)$ algebraic equations. Differentiating with respect to $x^j$ and rearranging gives $$\sum_i {\bf v}_a^i \frac{\partial {\bf v}_b^i}{\partial x^j} = -\sum_i {\bf v}_b^i \frac{\partial {\bf v}_a^i}{\partial x^j},$$ and if we expand $(\ast)$ using the product rule, we can use this to combine two of the four resulting terms in the summand.)

We can simplify the system in another way, too, namely, using the fact (closely related to the result at the beginning of this answer) that given a single nonvanishing vector field $\bf a$, at least locally we can always pick coordinates $(t^a)$ such that $\frac{\partial}{\partial t^1} = {\bf a}$. By definition in these coordinates we have ${\bf a}^1 = 1$ and ${\bf a}^2 = \cdots = {\bf a}^n = 0$.

For example, in the case $n = 2$, if we denote our coordinates adapted in the above way by $(x, y)$ and our given frame by $({\bf a}, {\bf b})$, the system of differential equations in the scaling functions, say, $f(x, y), g(x, y)$ simplifies to \begin{align} 0 &= g {\bf b}^1 f_x + g {\bf b}^2 f_y - f g_x {\bf b}^1 - f {\bf b}^1_x g \\ 0 &= f \left( g_x {\bf b}^2 + g {\bf b}^2_x \right) , \end{align} where, as usual, for readability arguments of functions are suppressed and subscripts denote differentiation w.r.t. the given variable.

It's easy to integrate the second equation: it says $\partial_x(g \, {\bf b}^2) = 0$, giving $$g(x, y) = \frac{H(y)}{{\bf b}^2(x, y)}$$ for any nonvanishing function $H$ (wherever ${\bf b}^2 \neq 0$), which reduces the problem to substituting into and solving the first P.D.E. I do not see at a glance whether this system always has a solution (or whether the corresponding system for general $n$ does). Perhaps someone else can see this?
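The integration step for $g$ can at least be confirmed symbolically. A small sympy check (my addition; the particular ${\bf b}^2$ below is an arbitrary nonvanishing sample chosen only for illustration, and $H$ is left as a generic function of $y$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
H = sp.Function('H')                      # arbitrary nonvanishing H(y)
b2 = 1 + x**2 + sp.sin(y)**2              # sample nonvanishing b^2(x, y)

g = H(y) / b2                             # the proposed solution

# The second equation (dropping the nonzero factor f) is
#   g_x * b^2 + g * b^2_x = d/dx (g * b^2) = 0.
second_eq = sp.diff(g, x) * b2 + g * sp.diff(b2, x)
print(sp.simplify(second_eq))             # 0
```

The check works for any ${\bf b}^2$, since $g \, {\bf b}^2 = H(y)$ is independent of $x$ by construction; the open question is only the remaining first P.D.E.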