Recently I have been learning about duals of vector spaces: spaces of co-vectors that map from the vector space to its underlying field. For simplicity I will denote the vector spaces as $V, W$, their duals as $V^*, W^*$, and the underlying field as $F$. This means that elements of the dual space are functions: $x\in V^* \implies x: V\to F$.
Now what I realized recently when thinking about this concept is that we already have functions that exhibit this property: row vectors. Multiplying a row vector by a column vector the "usual way", which can be thought of as function application, is simply the regular dot product of two vectors, and this is trivially a linear map, just like a co-vector.
$$\begin{bmatrix}x_1 & x_2 & \dots \end{bmatrix}\begin{bmatrix}y_1 \\ y_2 \\ \vdots \end{bmatrix}=x_1y_1+x_2y_2+\dots$$
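To make this concrete, here is a small sketch (the function name `apply_covector` is my own) showing that a row vector acting on a column vector is just the dot product, and that this action is linear:

```python
def apply_covector(row, col):
    """Apply the covector (row vector) to the vector (column vector):
    this is exactly the dot product of the two coordinate tuples."""
    return sum(r * c for r, c in zip(row, col))

row = [1, 2, 3]   # coordinates of a covector in V*
u = [4, 5, 6]     # coordinates of a vector in V
v = [1, 0, 2]

print(apply_covector(row, u))  # 1*4 + 2*5 + 3*6 = 32

# Linearity check: row(a*u + b*v) == a*row(u) + b*row(v)
a, b = 2, -1
combo = [a * x + b * y for x, y in zip(u, v)]
assert apply_covector(row, combo) == a * apply_covector(row, u) + b * apply_covector(row, v)
```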
Another property of co-vectors is the "pullback", which provides a linear way to move between co-vector spaces. We already have a linear mapping between co-vector spaces, the matrix, and it exhibits the same properties as the pullback. A basic identity (familiar from differential forms) is that for a linear map $f:V\to W$ the pullback is defined as $f^*: W^* \to V^*$. Then, denoting $w^*\in W^*$, $v^* \in V^*$, etc., the pullback satisfies
$$f^*(w^*)=v^* \implies w^*\circ f = v^*, \quad\text{i.e.}\quad f^*(w^*)(v) = w^*(f(v)).$$
If we look at row vectors in $V^t$ and matrices, we see essentially the same result. By definition of the transpose of a matrix $A$: $A:V\to W \implies A^t: W^t \to V^t$, and $Av = w \implies v^tA^t=w^t$, so acting on a row vector by the transpose plays exactly the role of the pullback.
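The transpose-as-pullback identity above can be checked numerically. A minimal sketch (helper names `matvec`, `transpose`, `dot` are my own) verifying that $(A^t w^t)(v) = w^t(Av)$ for a concrete matrix:

```python
def matvec(A, v):
    """Multiply matrix A (list of rows) by column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2],
     [3, 4],
     [5, 6]]          # A : V = F^2 -> W = F^3
v = [1, 1]            # a vector in V
w_star = [1, 0, 2]    # a covector on W (a row vector)

# Pullback of w_star along A: apply the transpose, giving a covector on V.
pulled = matvec(transpose(A), w_star)

# The defining property of the pullback: (A^t w_star)(v) == w_star(A v)
assert dot(pulled, v) == dot(w_star, matvec(A, v))
print(dot(pulled, v))  # both sides equal 25
```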
Another fact about co-vectors is that they can be expressed in terms of a basis, which row vectors trivially can. So, with these facts laid out, what is the real difference between co-vectors and row vectors? And assuming row vectors are co-vectors, why don't textbooks teach them that way (at least at first)?
Not all vectors are tuples of scalars. As you’ve written in your question, covectors are linear functionals. Now, any finite-dimensional vector space over $F$ is isomorphic to $F^n$, so you can certainly identify elements of $V$ with elements of $F^n$, and with a suitable choice of bases for $V$ and $V^*$ applying a covector to a vector becomes matrix multiplication of their coordinate representations. But that’s not at all the same thing as saying that these objects *are* row or column vectors. This distinction becomes even more important when dealing with infinite-dimensional spaces.
The “suitable choice of bases” above is an important detail. If $\mathbf v\in V$ and $\alpha\in V^*$ then for $\alpha(\mathbf v)$ to equal $[\alpha]_{\mathcal B^*}^T[\mathbf v]_{\mathcal B}$, the bases $\mathcal B$ and $\mathcal B^*$ must be dual, that is, their respective elements must satisfy $\varepsilon_i(\mathbf e_j) = \delta_{ij}.$ This is obviously true for the standard bases of $F^n$, but if you use other bases, $\alpha(\mathbf v)$ might not be a simple dot product of their representations. You’re onto an important idea here, though: the Riesz representation theorem connects applying a covector to a vector with the inner product on the vector space, and every inner product on $F^n$ is, in a suitable basis, just the dot product of the coordinate vectors.
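A small numerical illustration of why the bases must be dual. The covector, basis, and coordinates below are my own choices, not from the text: take $\alpha(x,y)=y$ on $\mathbb R^2$ and the non-standard basis $\mathcal B = \{(1,1),(0,1)\}$. Its dual basis works out to $\varepsilon_1(x,y)=x$, $\varepsilon_2(x,y)=y-x$, and only the dual-basis coordinates of $\alpha$ reproduce $\alpha(\mathbf v)$ as a dot product:

```python
# Covector alpha(x, y) = y, and the vector v = (2, 3), so alpha(v) = 3.
alpha = lambda v: v[1]
v = (2, 3)

# Coordinates of v in the basis B = {(1,1), (0,1)}: v = 2*(1,1) + 1*(0,1).
v_B = [2, 1]

# Naively dotting alpha's STANDARD coordinates [0, 1] with v's B-coordinates
# gives the wrong answer:
naive = 0 * v_B[0] + 1 * v_B[1]      # = 1, but alpha(v) = 3

# In the dual basis B* (eps_1 = x, eps_2 = y - x) we have
# alpha = y = x + (y - x) = eps_1 + eps_2, so alpha's coordinates are [1, 1],
# and the dot product now agrees with alpha(v):
correct = 1 * v_B[0] + 1 * v_B[1]    # = 3 = alpha(v)

assert naive != alpha(v)
assert correct == alpha(v)
```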