Suppose we are given a vector field $\vec{a}$ such that
$$\vec{a}(x_1,\ldots,x_n)=\sum_{i=1}^{k}f_i(x_1,\ldots,x_n)\vec{e_i} $$
where
$$\mathbf{S}=\{\vec{e_1},\ldots,\vec{e_k}\}$$ is some constant, orthonormal basis of $\Bbb{R}^k$.
What follows is to be taken with a grain of salt. To compute the directional derivative, we start with the gradient. Its components are given by the matrix $\mathbf{G}$:
$$\mathbf{G}=\begin{bmatrix}\frac{\partial f_1(x_1,\ldots,x_n)}{\partial x_1} & \cdots &\frac{\partial f_1(x_1,\ldots,x_n)}{\partial x_n}\\ \vdots & \ddots & \vdots\\\frac{\partial f_k(x_1,\ldots,x_n)}{\partial x_1}&\cdots&\frac{\partial f_k(x_1,\ldots,x_n)}{\partial x_n}\end{bmatrix}.$$
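As a quick numerical sanity check (everything here is my own illustration, not part of the question: the field $\vec{a}(x,y)=(xy,\;x+y^2)$, the point, and the step size are all made up), $\mathbf{G}$ can be approximated by central finite differences:

```python
# Approximate G[i][j] = d f_i / d x_j by central finite differences.
# The example field a(x, y) = (x*y, x + y**2) is invented for illustration;
# its exact Jacobian matrix is [[y, x], [1, 2*y]].

def jacobian(a, x, h=1e-6):
    k, n = len(a(x)), len(x)           # k output components, n variables
    G = [[0.0] * n for _ in range(k)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = a(xp), a(xm)
        for i in range(k):
            G[i][j] = (fp[i] - fm[i]) / (2 * h)
    return G

def a(x):
    return [x[0] * x[1], x[0] + x[1] ** 2]

G = jacobian(a, [2.0, 3.0])  # exact value at (2, 3): [[3, 2], [1, 6]]
```

This only checks the componentwise description of $\mathbf{G}$; it says nothing yet about how $\vec{u}$ should be contracted with it.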
The gradient $\vec{\nabla}\vec{a}$ itself is given by the double sum
$$\vec{\nabla}\vec{a}=\sum_{i=1}^{k}\sum_{j=1}^{n}\frac{\partial f_i(x_1,\ldots,x_n)}{\partial x_j}\vec{e_i}\otimes\vec{e_j}.$$ When dealing with scalar-valued functions, the derivative in the direction of some vector $\vec{u}$ would be the projection of the gradient onto $\vec{u}$.
Assuming this still holds, the directional derivative $\mathrm{D}_{\vec{u}}(\vec{a})$ of $\vec{a}$ is
$$\mathrm{D}_{\vec{u}}(\vec{a})=\vec{\nabla}\vec{a}\cdot\frac{\vec{u}}{|\vec{u}|}.$$
Substituting in our double sum:
$$\mathrm{D}_{\vec{u}}(\vec{a})=\left(\sum_{i=1}^{k}\sum_{j=1}^{n}\frac{\partial f_i(x_1,\ldots,x_n)}{\partial x_j}\vec{e_i}\otimes\vec{e_j}\right)\cdot\frac{\vec{u}}{|\vec{u}|}.$$
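If the contraction is read as pairing the $\vec{e_j}$ slot with $\hat{u}=\vec{u}/|\vec{u}|$, i.e. componentwise $(\mathrm{D}_{\vec{u}}\vec{a})_i=\sum_j \frac{\partial f_i}{\partial x_j}\hat{u}_j$, it agrees numerically with the limit $\bigl(\vec{a}(\vec{x}+t\hat{u})-\vec{a}(\vec{x})\bigr)/t$. A small Python check (the field, point, and direction are made up for illustration):

```python
# Compare D_u a computed two ways:
#   D1: contract the second (x_j) index of the Jacobian G with u/|u|
#   D2: the limit definition (a(x + t*u_hat) - a(x)) / t for small t
# The field a(x, y) = (x*y, x + y**2) is invented for this check.
import math

def a(x):
    return [x[0] * x[1], x[0] + x[1] ** 2]

def jacobian(a, x, h=1e-6):
    k, n = len(a(x)), len(x)
    G = [[0.0] * n for _ in range(k)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = a(xp), a(xm)
        for i in range(k):
            G[i][j] = (fp[i] - fm[i]) / (2 * h)
    return G

x, u = [2.0, 3.0], [1.0, 1.0]
norm = math.sqrt(sum(c * c for c in u))
u_hat = [c / norm for c in u]

G = jacobian(a, x)
D1 = [sum(G[i][j] * u_hat[j] for j in range(len(x))) for i in range(len(a(x)))]

t = 1e-6
xt = [x[j] + t * u_hat[j] for j in range(len(x))]
D2 = [(a(xt)[i] - a(x)[i]) / t for i in range(len(a(x)))]
# D1 and D2 agree to roughly 1e-5.
```

Contracting the $\vec{e_i}$ slot instead would give a different (and, for $k\neq n$, dimensionally impossible) answer, which is part of what the question is asking about.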
Question: Is this generalisation for $\mathrm{D}_{\vec{u}}(\vec{a})$ true?
- If so, how does one evaluate it?
- If not, what is the proper way to find a directional derivative of a vector field?
Appendix
The symbol $\otimes$ denotes the tensor product; here, we take tensor products of basis vectors.
Furthermore, following the article on dyadics on Wikipedia, it seems that for an orthonormal basis $$\mathrm{D}_{\vec{u}}(\vec{a})=\frac{\vec{u}}{|\vec{u}|}\mathbf{G}.$$ So if $\vec{u}=\vec{e_m}$, then $$\mathrm{D}_{\vec{e_m}}(\vec{a})=\vec{e_m}\mathbf{G}.$$ This makes no sense, unless it is some kind of tensor contraction... In such a case, $$\mathrm{D}_{\vec{e_m}}(\vec{a})=\begin{bmatrix}\sum_{i=1}^{k}e_iG_{i1}\\ \vdots \\ \sum_{i=1}^{k}e_iG_{in}\end{bmatrix}.$$
Here $e_i$ denotes the $i^{th}$ component of $\vec{e_m}$, and $G_{ij}$ denotes the $ij^{th}$ component of $\mathbf{G}$. And since we are in an orthonormal basis, the only nonzero component is $e_m=1$:
$$\mathrm{D}_{\vec{e_m}}(\vec{a})=\begin{bmatrix}e_mG_{m1}\\ \vdots \\ e_mG_{mn}\end{bmatrix}=\begin{bmatrix}G_{m1}\\ \vdots \\ G_{mn}\end{bmatrix}.$$
This seems to be the $m^{th}$ row of $\mathbf{G}$ transposed. And in derivative form, $$\mathrm{D}_{\vec{e_m}}(\vec{a})=\begin{bmatrix}\frac{\partial f_m(x_1,\ldots,x_n)}{\partial x_1}\\ \vdots \\ \frac{\partial f_m(x_1,\ldots,x_n)}{\partial x_n}\end{bmatrix}.$$
Answer
To generalize, let's first go back a little and talk about the directional derivative of a scalar-valued function $f(\vec{x})$ of a vector variable $\vec{x}$ in a general and invariant language. If $\vec{d}$ is a direction vector (unit length), then the directional derivative of $f$ at $\vec{x} = \vec{x}_{0}$ in the direction $\vec{d}$ can be defined as follows:
It is the image of the linear transformation ${df \over d\vec{x}}( \vec{x}_{0})$ acting on the vector $\vec{d}$.
Thus, the generalization consists in replacing the scalar function $f$ by a vector-valued one, $\vec{f}$, and writing down the invariant definition of the derivative $$ {d\vec{f} \over d\vec{x}}( \vec{x}_{0}). $$ This derivative is, by definition, a certain linear transformation from the tangent space at $\vec{x}_{0}$ of the domain of $\vec{f}$ to the tangent space at $\vec{f}(\vec{x}_{0})$ of the range of $\vec{f}$.
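To make the invariant definition concrete, here is a small worked example (the field is chosen purely for illustration): take $\vec{f}(x,y)=(xy,\;x+y^2)$. The derivative at $\vec{x}_0=(x,y)$ is the linear map with matrix
$$\frac{d\vec{f}}{d\vec{x}}(\vec{x}_0)=\begin{bmatrix} y & x \\ 1 & 2y \end{bmatrix},$$
and the directional derivative in the unit direction $\vec{d}=(d_1,d_2)$ is the image of $\vec{d}$ under this map:
$$\mathrm{D}_{\vec{d}}\vec{f}(\vec{x}_0)=\begin{bmatrix} y & x \\ 1 & 2y \end{bmatrix}\begin{bmatrix} d_1 \\ d_2 \end{bmatrix}=\begin{bmatrix} y\,d_1 + x\,d_2 \\ d_1 + 2y\,d_2 \end{bmatrix}.$$
In particular, $\vec{d}=\vec{e_1}$ picks out the first column, $\partial\vec{f}/\partial x$: the direction vector contracts with the $x_j$ (column) index of the matrix, not the row index.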
The specific defining properties of this linear transformation can (and, at first, should) be stated without resorting to bases or tensor representations; they are described on page 66 of this book: https://books.google.com/books?id=JUoyqlW7PZgC&printsec=frontcover&dq=arnold+ordinary+differential+equations&hl=en&sa=X&ved=0ahUKEwjGv_y44OfPAhXDSSYKHXvZCC4Q6AEIHjAA#v=onepage&q=The%20action%20of%20diffeomorphisms&f=false