The basis of a matrix representation


If I have a linear map $f:\Bbb{R}^n\rightarrow \Bbb{R}^m$, then we can write $f$ as follows: $$f\left(\vec x\right)=A\vec x$$ where $A$ is a matrix. I think $A$ is called the standard matrix for $f$. Linear maps act on vectors and therefore should not be associated with any basis, i.e. they act on vectors rather than 'coordinate vectors'. Does this mean that the matrix $A$ is not associated with any basis? (Noting that in the standard bases of the two vector spaces, the matrix representation of $f$ will be equivalent to $A$.)

i.e. is the following statement correct:

The matrix $A$ is equivalent to the linear map $f$ when acting on a vector in $\Bbb{R}^n$. The matrix $\tilde A$, which is the matrix representation of $f$ in the standard bases of $\Bbb{R}^n$ and $\Bbb{R}^m$, has exactly the same components as $A$ but acts on coordinate vectors rather than the actual vectors the linear map $f$ acts on. These coordinate vectors will, however, take exactly the same form, in the standard bases, as the original vectors that $f$ acts on.

There are 3 solutions below.

Best answer:

I've never seen the notation $\tilde A$ used to mean $A$ w.r.t. the standard basis, but $A$ is ALWAYS w.r.t. some basis. Think about it: matrices have components. What would those components be if the matrix were not w.r.t. some basis?

So $f$ is basis-free -- it doesn't matter which basis you choose, $f$ will always be the linear map that does a specific thing (determined by its definition).

$A$ is basis-dependent. You can only specify a matrix representation of a transformation $f$ if you've already chosen a basis. And of course, the same matrix will NOT work if you later decide to change your basis (though you can transform it with an invertible matrix $P$ like $P^{-1}AP$).
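The change-of-basis transformation $P^{-1}AP$ can be checked numerically. Here is a minimal NumPy sketch (the specific map and basis are made-up examples, not from the question): the same linear map gets a different matrix in a different basis, yet both matrices describe the same action.

```python
import numpy as np

# Matrix of a map f w.r.t. the standard basis of R^2 (f doubles the
# first coordinate -- an arbitrary example for illustration).
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Change-of-basis matrix P whose columns are a new basis {(1,1), (1,-1)}.
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# Matrix of the SAME map f w.r.t. the new basis.
A_new = np.linalg.inv(P) @ A @ P

# Sanity check: take coordinates x_new in the new basis, convert to
# standard coordinates, and verify both matrices agree on the result.
x_new = np.array([1.0, 0.0])   # coordinates of (1,1) in the new basis
x_std = P @ x_new              # the same vector in standard coordinates
assert np.allclose(P @ (A_new @ x_new), A @ x_std)
```

Note that `A_new` has different entries from `A` even though both represent the same map $f$, which is exactly the basis-dependence described above.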

$\tilde A$ is apparently the matrix representation of $f$ w.r.t. the standard basis. This is of course, basis-dependent.

$\vec x$ is an object just like $f$. By that I mean it is intrinsically basis-free. The coordinates of $\vec x$ are determined after a basis is chosen. But we don't usually use any special notation to specify whether $\vec x$ is a coordinate vector or an abstract vector UNLESS we're doing a change of basis problem.

Another answer:

For simplicity assume that $n = m$.

First some general things. Let $U$ and $V$ be two real vector spaces of dimension $n$. A linear transformation $T: U \to V$ is a map that satisfies

$$T(\alpha u + v) = \alpha T(u) + T(v)$$ for all $\alpha \in \mathbb{R}$ and all $u,v\in U$.

Now, since $U$ and $V$ are real vector spaces of dimension $n$ you have isomorphisms $\phi: U\simeq \mathbb{R}^n$ and $\psi: V\simeq \mathbb{R}^n$. It is really all about these isomorphisms.

So you end up with a map $S: \mathbb{R}^n \to\mathbb{R}^n$, namely $S = \psi\circ T\circ\phi^{-1}$, such that the diagram $$ \require{AMScd} \begin{CD} U @>{T}>> V\\ @VVV @VVV \\ \mathbb{R}^n @>{S}>> \mathbb{R}^n \end{CD} $$ commutes.

Here the map $S$ will depend on the isomorphisms $U\simeq \mathbb{R}^n$ and $V\simeq \mathbb{R}^n$. $S$ is given by a matrix $A$ such that $S(v) = Av$ for $v\in \mathbb{R}^n$. The relation is that if $\{e_i\}$ is the standard basis for $\mathbb{R}^n$ then you get bases $\{f_i\}$ for $U$ and $\{g_i\}$ for $V$ such that $\phi^{-1}(e_i) = f_i$ and $\psi^{-1}(e_i) = g_i$. You can obviously also pick other bases. However, if you go with this, then you have that

$$ T(f_i) = \psi^{-1}(S\phi(f_i)) = \psi^{-1}(Se_i). $$ And here $Se_i$ is exactly the $i$th column of the matrix $A$.
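The closing remark -- that the image of the $i$th basis vector is the $i$th column of $A$ -- is easy to verify computationally. A short sketch in NumPy (the map `f` below is an arbitrary made-up example, not one from the answer): we build the matrix column by column from $f(e_i)$ and check it reproduces $f$.

```python
import numpy as np

def f(v):
    # A made-up linear map R^3 -> R^2, used only for illustration.
    x, y, z = v
    return np.array([x + 2 * y, 3 * z - y])

# Column i of the matrix of f (in the standard bases) is f(e_i).
n = 3
A = np.column_stack([f(e) for e in np.eye(n)])

# The matrix built this way agrees with f on any vector.
v = np.array([1.0, -2.0, 4.0])
assert np.allclose(A @ v, f(v))
```

This is the same recipe in either direction: given the matrix you read off the images of the basis vectors from its columns, and given the map you assemble the matrix from those images.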

Another answer:

You are right that linear maps act on vectors, not on their coordinates. But in $\Bbb R^n$ there is no distinction between a vector and its coordinates in the standard basis: in general given any vector $v$ in a real vector space $V$ equipped with a basis $B$, the tuple of coordinates of $v$ with respect to $B$ is an element of$~\Bbb R^n$; but in the special case that $V=\Bbb R^n$ and $B$ is the standard basis, that element is just $v$ itself. This is what the standard basis is about: if you want to think of $v\in\Bbb R^n$ as holding the coordinates of some vector, but you have no abstract situation at hand, then you can still think of it as being the coordinates of itself with respect to the standard basis.

It is somewhat unfortunate though that the same word "vector" is used sometimes to denote a general element of an abstract vector space, and sometimes (as in "column vector") to denote an $n$-tuple of numbers, an element specifically of$~\Bbb R^n$.

Some people would make a distinction between $\Bbb R^n$ and the set of column vectors of size $n$. I don't think such a distinction is very useful; writing the elements of an $n$-tuple vertically is just a notational convention. In any case the two vector spaces are canonically isomorphic (meaning that one can point out a specific isomorphism without making any choices).

Similarly the vector space of $m\times n$ matrices with real entries is canonically isomorphic to the space $\mathcal L(\Bbb R^n,\Bbb R^m)$ of $\Bbb R$-linear maps $\Bbb R^n\to\Bbb R^m$. The canonical isomorphism is given by $A\mapsto(v\mapsto A\cdot v)$ (matrix$\times$column-vector multiplication is defined independently of any choices of bases). Here one cannot say that it is just a notational matter, since one cannot construe matrices (which are just blocks of numbers) to be linear maps, but the correspondence is canonical nonetheless.
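The canonical isomorphism $A\mapsto(v\mapsto A\cdot v)$ can be sketched as a pair of functions that are mutually inverse; the names `map_of` and `matrix_of` and the example matrix are my own illustrations, not notation from the answer.

```python
import numpy as np

def map_of(A):
    # The canonical direction: a matrix A yields the linear map v |-> A v.
    return lambda v: A @ v

def matrix_of(g, n):
    # The inverse direction: recover the matrix of a linear map
    # g: R^n -> R^m by stacking the columns g(e_i).
    return np.column_stack([g(e) for e in np.eye(n)])

A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])   # an arbitrary 2x3 example

# The round trip recovers A exactly -- no choice of basis was involved.
assert np.allclose(matrix_of(map_of(A), 3), A)
```

The round trip involving no arbitrary choices is precisely what "canonical" means in the paragraph above.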

So to answer your question, you can distinguish $A$ from the linear map $f:\Bbb R^n\to\Bbb R^m$ that it corresponds to. But $A$ is really the same as the matrix $\tilde A$ of the linear map with respect to the standard bases of $\Bbb R^n$ and $\Bbb R^m$. As you said, those matrices have the same size and the same entries, and in the world of matrices that makes them equal, period. And no choice of basis is involved here: standard bases (of spaces of the form $\Bbb R^n$, which are the only ones to have them) are always given, never chosen.