What's the meaning of row vector in terms of physics/intuitively?


What I mean is this: in mathematics, a tensor is just a multilinear map by itself. But in physics, a tensor is a quantity that obeys a certain transformation law under coordinate transformations, meaning that it is an arrow that does not change, no matter from which perspective we look at it. I don't really get the idea of row vectors intuitively. Is it okay to think of them as functions that take a column vector as their argument?


There are 3 best solutions below

BEST ANSWER

The vector in its vector space does not care whether you write its coordinates in rows or columns. Writing coefficients in a table comes, I think, from compactly writing down linear systems of equations and solving them via row manipulations. There it is quite natural to write the coefficients of each single equation in a row. Once the coefficients of linear functionals are associated with rows, it is natural to put the coefficients of the vectors themselves in columns, and thus you get the basic conventions of matrix calculus: linear maps and functionals between vector spaces are represented by matrices, and their composition by operations between those matrices.
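A minimal NumPy sketch of these conventions (the functional and the vector are my own illustrative examples, not from the answer): functional coefficients go in a row, vector coefficients in a column, and applying the functional is the row-times-column matrix product.

```python
import numpy as np

# Coefficients of the linear functional f(x) = 2*x1 - 1*x2 + 3*x3, kept as a row.
f_row = np.array([[2.0, -1.0, 3.0]])      # shape (1, 3)

# Coefficients of a vector, kept as a column.
v_col = np.array([[1.0], [4.0], [2.0]])   # shape (3, 1)

# Applying the functional to the vector is the row-times-column product.
result = f_row @ v_col                    # shape (1, 1)
print(result[0, 0])                       # 2*1 - 1*4 + 3*2 = 4.0
```

The shapes make the convention visible: a (1, 3) row composed with a (3, 1) column yields a (1, 1) scalar, exactly the "equation acting on unknowns" picture described above.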

The difference between physics and mathematics is that, in relation to coordinates, for a mathematician the basis of the vector space is the primary object, and a change of basis induces an associated transformation of the coefficients. In physics the coefficient tuples and their transformation laws are in the foreground; I sometimes have the impression that the idea of a basis, or of a (formal) vector space, is seen as an unimportant consequence of this formalism.

ANSWER 2

As noted in a comment, "row vectors" are not physical objects. They arise in mathematics from our choice of how to organize vector information. There is no reason why there should not also be "diagonal vectors": instead of writing parallel or perpendicular to our screen, sheet of paper, or what have you (in other words, our coordinate system), we could write in diagonals, or in circles, and we would then have "diagonal vectors" and "circle vectors". The choices are as endless as they are arbitrary.

That said, once you organize the information carried by a linear transformation (between finite-dimensional vector spaces, say) in a matrix, the terms "column vectors" and "row vectors" become useful. Say you have an $m\times n$ matrix. It represents a certain linear transformation $T:\mathbb{R}^n\to\mathbb{R}^m$. The "column vectors" lie in the codomain of $T$, i.e., in $\mathbb{R}^m$, and they span its image. The "row vectors" lie in the image of the dual map $T^*$, i.e., in $(\mathbb{R}^n)^*$. In the very special case $m=n$, it is not wrong to think of the row vectors as objects lying in the dual space of $\mathbb{R}^n$, which, although identified with $\mathbb{R}^n$ itself, nonetheless allows us to interpret the row vectors as linear functionals whose domain contains the column vectors. So your suggestion makes sense in a certain special case.
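The two roles of the matrix entries can be sketched numerically (the particular $2\times 3$ matrix is my own example): each column is a vector in the codomain $\mathbb{R}^m$, and each row acts as a linear functional on the domain $\mathbb{R}^n$.

```python
import numpy as np

# A 2x3 matrix: a linear map T: R^3 -> R^2 (so m = 2, n = 3).
T = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])

x = np.array([1.0, 1.0, 1.0])

# Column view: T @ x is a linear combination of the columns, which live in R^2.
columns = [T[:, j] for j in range(3)]
combo = x[0] * columns[0] + x[1] * columns[1] + x[2] * columns[2]
assert np.allclose(T @ x, combo)

# Row view: each row is a functional on R^3; it eats x and returns one scalar,
# and the m scalars together form T(x).
row_values = [T[i, :] @ x for i in range(2)]
assert np.allclose(T @ x, row_values)
print(T @ x)   # [3. 4.]
```

Both loops compute the same result, which is exactly the column-vector/row-functional duality described above.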

ANSWER 3

If vectors are understood, as in the classical context, as tangent vectors, i.e., linear combinations of directional derivatives in a coordinate system, then a column vector is the coordinate representation of such an object in $\mathbb R^n$, obtained locally by moving a point along the coordinate lines.

$$V = \sum_i v^i\, \mathbf e_i \quad \leftrightarrow \quad \left\langle \left( \mathbf e_1,\dots , \mathbf e_n \right),\; \begin{pmatrix} v^1\\ \vdots \\ v^n \end{pmatrix} \right\rangle$$ with $\left\langle a,b\right\rangle$ meaning the linear combination $a*b$: multiplication component by component, then summing (changing $\text{List} \to \text{Plus}$).

The basis of the dual space of 1-forms $\mathbb R^n\to \mathbb R$ consists of the differentials of the coordinates, which project a vector onto its coordinates as scalars:

$$\Omega = \sum_k \omega_k\, \mathrm dx^k \quad \leftrightarrow \quad \left\langle \left( \omega_1,\dots ,\omega_n \right),\; \begin{pmatrix} \mathrm dx^1\\ \vdots \\ \mathrm dx^n \end{pmatrix} \right\rangle$$

with the duality relations

$$\int_{\mathbf e_i} \mathrm dx^k = \delta_i^k, \qquad \mathrm dx^k\!\left(\partial_{x^i}\right) = \delta_i^k.$$

Then, in the evaluation of a form $\Omega$ on a vector, $\Omega(V)$, the pairings of basis forms with basis vectors reduce to Kronecker deltas, and what remains is the contraction of the row of form components on the left with the column of vector components on the right: the linear combination $\left\langle a,b\right\rangle = a^T b$ is realized by the standard matrix product.

For physicists, perhaps the best exposition of the computational machinery and the geometric ideas behind it is the introductory chapters on differential geometry in Misner, Thorne, Wheeler, "Gravitation": vectors are linearized directed intervals along curves at a point, differentials are the linearized level hypersurfaces of the coordinate functions, and evaluation is the line integral, counting the number of hypersurface crossings.