As opposed to row vectors? It seems that operations on vectors in space (applying a matrix/linear transformation to one, for example) do not work unless the vector is in its column form, since many things, such as matrix multiplication, depend on dimensionality. Why is it that things work with column vectors but not row vectors?
Is there a reason vectors in space are represented as column vectors (in that nothing works with row vectors)?
167 Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There is 1 best solution below.
It seems that you consider the vector space $K^m$ of all $m$-tuples [1] $\mathbf x = (x_1,\dots, x_m)$ with coordinates $x_i$ in a field $K$, for example $K = \mathbb R$.
One can very well argue that it is just a notational issue (i.e. a mere convention) how to write the elements $\mathbf x \in K^m$. The only requirement is that we can uniquely identify the coordinates of each vector. Expressed more formally, the chosen notation must come along with well-defined coordinate projections $p^m_i : K^m \to K$.
Some standard representation variants are:
- tuple notation: $\mathbf x = (x_1, \dots, x_m)$,
- row vector notation: $\mathbf x = \begin{pmatrix} x_1 & \dots & x_m \end{pmatrix}$,
- column vector notation: $\mathbf x = \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix}$.
The difference between tuple and row vector representation perhaps seems somewhat artificial, since we only changed the separating character between the elements $x_i$ (comma / blank space), but we should be aware that in both row and column vector representation $\mathbf x$ is written as a matrix [2].
As long as we only consider vectors in $K^m$, the choice of a representation is absolutely irrelevant. We can add vectors and take scalar multiples in the obvious way. However, the concrete representation variant becomes important if we consider linear maps $f : K^m \to K^n$. The point is that such a linear map is usually expressed in the form of a matrix obtained by assembling the images $f(\mathbf e^m_i) \in K^n$ of the standard basis vectors [3] $\mathbf e^m_i \in K^m$. This can be done based on row and on column vector representation: in row vector representation the images $f(\mathbf e^m_i)$ form the rows of a matrix $\mathbf M_{row}(f) \in K(m,n)$; in column vector representation they form the columns of a matrix $\mathbf M_{col}(f) \in K(n,m)$.
Clearly these matrices are transposed, i.e. $(\mathbf M_{row}(f))^T = \mathbf M_{col}(f)$. It is well-known that the image vectors $f(\mathbf x)$ can now be computed via matrix multiplication [4]. We have \begin{equation} f(\mathbf x) = \mathbf x \cdot \mathbf M_{row}(f) , \tag{1} \end{equation} \begin{equation} f(\mathbf x) = \mathbf M_{col}(f) \cdot \mathbf x . \tag{2} \end{equation}
These two equations explain why the column vector form is the most popular variant when working with matrix representations of vectors and linear maps: In $(2)$ the vector $\mathbf x$ occurs right of the operator on both sides of the equation, in $(1)$ the order is reversed.
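Both conventions can be checked numerically. Below is a minimal NumPy sketch (NumPy and the concrete map $f$ are my own illustrative choices, not part of the answer): the column-form matrix is assembled from the images of the standard basis vectors, and the row-form matrix is its transpose.

```python
import numpy as np

# An illustrative linear map f : R^3 -> R^2, given as a Python function.
def f(v):
    return np.array([v[0] + 2.0 * v[2], 3.0 * v[1] - v[2]])

# Assemble the column-form matrix: its columns are f(e_1), f(e_2), f(e_3).
E = np.eye(3)
M_col = np.column_stack([f(E[:, i]) for i in range(3)])
M_row = M_col.T                     # row-form matrix: images as rows

x = np.array([1.0, 2.0, 3.0])
assert np.allclose(M_col @ x, f(x))   # column convention: f(x) = M_col . x
assert np.allclose(x @ M_row, f(x))   # row convention:    f(x) = x . M_row
```

Both assertions hold because the two matrices are transposes of each other, so the two products compute the same coordinates.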
To distinguish visibly between $\mathbf x \in K^m$ and its notational variants in row and column vector representation, I suggest writing $\mathbf x_{row} \in K(1,m)$ and $\mathbf x_{col} \in K(m,1)$. Doing so we get the two unambiguous formulas \begin{equation} f(\mathbf x)_{row} = \mathbf x_{row} \cdot \mathbf M_{row}(f) , \tag{3} \end{equation} \begin{equation} f(\mathbf x)_{col} = \mathbf M_{col}(f) \cdot \mathbf x_{col} . \tag{4} \end{equation} Note that $\mathbf x_{row}^T = \mathbf x_{col}$.
Whatever our preference is, we have to make a fundamental choice either for the row or the column vector form. This representation convention must then be strictly applied to all vectors of $K^m$ and all linear maps $f : K^m \to K^n$; we never decide case-by-case.
To emphasize: both possible representation conventions work perfectly.
In the sequel we focus on the column vector representation and simply write $\mathbf M(f) = \mathbf M_{col}(f)$. The column vectors of $\mathbf M(f)$ are the images $f(\mathbf e^m_i)$ of the standard basis vectors of $K^m$, i.e. they are regarded as ordinary vectors in $K^n$. What about the row vectors $(\mathbf M(f))_{(i)}\in K(1,m)$ of $\mathbf M(f)$? Clearly $(\mathbf M(f))_{(i)}$ is the matrix representation of the $i$-th coordinate function $f_i = p^m_i \circ f : K^m \to K$. Therefore the row vectors must not be regarded as ordinary vectors of $K^m$ written in row vector form, but as matrix representations of linear functionals on $K^m$. Note that these are elements of the dual space $(K^m)^*$ of $K^m$. Thus our representation convention gives a simple rule of thumb: columns of $\mathbf M(f)$ represent vectors (elements of $K^n$), rows of $\mathbf M(f)$ represent linear functionals (elements of $(K^m)^*$).
This seems to be very clear, but indeed we get two different, not to say inconsistent, interpretations of column vectors: on the one hand they stand for elements of $K^n$, on the other hand for linear maps $K \to K^n$. The correspondence between elements of $K^n$ and linear maps $K \to K^n$ is obvious: $\mathbf x \in K^n$ is identified with the linear map $f_{\mathbf x} : K \to K^n, f_{\mathbf x}(a) = a\mathbf x$. In fact, $\mathbf M({f_{\mathbf x}}) = f_{\mathbf x}(\mathbf e^1_1) = f_{\mathbf x}(1) = \mathbf x$ in column vector representation.
So we have a "hybrid interpretation" of column vectors (and analogously of row vectors, had we adopted the row vector representation). We must therefore admit that our three vector representation variants above are not merely notational issues of complete arbitrariness: what a row or column vector stands for requires a context-related explanation.
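Both readings discussed above can be made concrete; the following NumPy sketch (the library and the concrete numbers are my own illustrative assumptions) checks that each row of $\mathbf M(f)$ acts as a coordinate functional, and that a column vector doubles as the matrix of a map $K \to K^n$.

```python
import numpy as np

# Column-form matrix of a linear map f : R^3 -> R^2 (illustrative values).
M = np.array([[1.0, 0.0, 2.0],
              [3.0, -1.0, 0.0]])
x = np.array([2.0, 1.0, 1.0])
fx = M @ x                        # f(x) = M(f) . x

# Each row of M is the matrix of a coordinate functional f_i = p_i ∘ f:
# applying row i to x reproduces the i-th coordinate of f(x).
assert np.allclose(M[0] @ x, fx[0])
assert np.allclose(M[1] @ x, fx[1])

# A column vector y in K(2,1) doubles as the matrix of the linear map
# f_y : K -> K^2, f_y(a) = a·y; applying it to the 1x1 matrix [a] is
# ordinary matrix multiplication.
y = np.array([[2.0], [5.0]])
a = np.array([[3.0]])
assert np.allclose(y @ a, 3.0 * y)    # f_y(3) = 3·y
```

The last assertion is exactly the identification $\mathbf M(f_{\mathbf x}) = \mathbf x$ from the answer, read as a $2 \times 1$ times $1 \times 1$ matrix product.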
Footnotes
1. Formally we can use the definition $K^m =$ set of all functions $\mathbf x : \{1,\dots,m\} \to K$. Then $x_i = \mathbf x(i)$.
2. The vector space of $p \times q$ matrices with entries in $K$ is denoted by $K(p,q)$. Formally we can use the definition $K(p,q) =$ set of all functions $\mathbf A : \{1,\dots,p\} \times \{1,\dots,q\} \to K$. With $a_{ij} = \mathbf A(i,j)$ one usually writes $\mathbf A = (a_{ij})$. The $i$-th row of $\mathbf A$ will be denoted by $\mathbf A_{(i)} \in K(1,q)$ (lower index) and the $j$-th column by $\mathbf A^{(j)} \in K(p,1)$ (upper index).
3. The standard ordered basis of $K^m$ is the $m$-tuple of vectors $\mathcal E^m = (\mathbf e^m_1,\dots, \mathbf e^m_m)$ with $\mathbf e^m_i = (\delta_{i1},\dots,\delta_{im}) \in K^m$, where $\delta_{ij} = 1$ for $i = j$ and $\delta_{ij} = 0$ for $i \ne j$. For $\mathbf x \in K^m$ with coordinates $x_i$ we have $\mathbf x = \sum_{i=1}^m x_i \mathbf e^m_i$.
4. Let us recall how matrix multiplication $\mu : K(p,q) \times K(q,r) \to K(p,r)$ works. First observe that for a row vector $\mathbf v \in K(1,q)$ with coordinates $v_j$ and a column vector $\mathbf w \in K(q,1)$ with coordinates $w_j$ we take $\mathbf v \cdot \mathbf w = \sum_{j=1}^q v_jw_j \in K$. Now let $\mathbf A = (a_{ij}) \in K(p,q)$ and $\mathbf B = (b_{jk}) \in K(q,r)$. Then $\mathbf A$ consists of $p$ row vectors $\mathbf A_{(i)} \in K(1,q)$ and $\mathbf B$ consists of $r$ column vectors $\mathbf B^{(k)} \in K(q,1)$. The matrix $\mathbf C = \mathbf A \cdot \mathbf B$ then has the entry $\mathbf A_{(i)} \cdot \mathbf B^{(k)}$ at position $(i,k)$. Thus we get the well-known representation $\mathbf C = (c_{ik})$ with $c_{ik} = \sum_{j=1}^q a_{ij}b_{jk}$. Clearly, the $i$-th row of $\mathbf A \cdot \mathbf B$ is $\mathbf A_{(i)} \cdot \mathbf B$ and the $k$-th column is $\mathbf A \cdot \mathbf B^{(k)}$. Moreover, for a row vector $\mathbf a \in K(1,q)$ with coordinates $a_j$ we get $\mathbf a \cdot \mathbf B = \sum_{j=1}^q a_j \mathbf B_{(j)}$, and for a column vector $\mathbf b \in K(q,1)$ with coordinates $b_j$ we get $\mathbf A \cdot \mathbf b = \sum_{j=1}^q b_j \mathbf A^{(j)}$.
5. We have $\mathbf M_{col}(f) \cdot \mathbf x = \sum_{j=1}^m x_j (\mathbf M_{col}(f))^{(j)} = \sum_{j=1}^m x_j f(\mathbf e^m_j) = f(\sum_{j=1}^m x_j \mathbf e^m_j) = f(\mathbf x).$ Formula $(1)$ is obtained similarly.
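The row/column identities recalled in the footnote on matrix multiplication can be verified numerically; a small NumPy sketch with illustrative matrices (NumPy and the concrete values are my own choices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])       # A in K(3,2)
B = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])  # B in K(2,3)
C = A @ B                        # C in K(3,3)

# The i-th row of A·B is A_(i) · B; the k-th column of A·B is A · B^(k).
assert np.allclose(C[1], A[1] @ B)
assert np.allclose(C[:, 2], A @ B[:, 2])

# For a row vector a: a · B = sum_j a_j B_(j) (combination of rows of B).
a = np.array([2.0, -1.0])
assert np.allclose(a @ B, a[0] * B[0] + a[1] * B[1])

# For a column vector b: A · b = sum_j b_j A^(j) (combination of columns of A).
b = np.array([1.0, 3.0])
assert np.allclose(A @ b, b[0] * A[:, 0] + b[1] * A[:, 1])
```

The last two identities are exactly why $\mathbf M_{col}(f) \cdot \mathbf x$ unfolds into the linear combination $\sum_j x_j f(\mathbf e^m_j)$ used in the derivation of formula $(2)$.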