I'm given a linear transformation whose matrix I can find with respect to the standard bases of the domain and codomain; but then the book asks me to find the associated matrix with respect to a new basis for the domain... then the same for a new basis for the codomain... then for both together. There's an example showing the mechanics, but I can't see why it works; I just see that they multiply or pre-multiply the standard-basis matrix by the matrix of the new basis. Could you please help me understand why this solves the question?
Edit:
Example:
Given $T: \mathbb{R}^2 \to\mathbb{R}^3$ defined by the following matrix with respect to the standard bases:
$$T(\mathbf{x})=\begin{bmatrix}1&1\\-1&0\\0&0\end{bmatrix}\mathbf{x}$$
Find the associated matrix when the domain basis is $\{(1,3)^\intercal,(-2,4)^\intercal\}$. Do the same when the codomain basis is $\{(1,1,1)^\intercal,(2,2,0)^\intercal,(3,0,0)^\intercal\}$. Then do both together.
Thanks!!
The General Matrix Representation Theorem:
Let $T:V\to W$ be a linear map from an $n$-dimensional vector space $V$ to an $m$-dimensional vector space $W$, let $B_V = \{\vec v_1,\dots,\vec v_n\}$ be an ordered basis for $V$, and let $B_W = \{\vec w_1,\dots,\vec w_m\}$ be an ordered basis for $W$. Then there is a unique $m \times n$ matrix $A$ such that $[T(\vec v)]_{B_W} = A[\vec v]_{B_V}$ for every $\vec v \in V$.
From this, we have an algorithm for constructing a matrix representation for a linear map:
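To make the algorithm concrete, here is a minimal NumPy sketch applied to the example transformation from the question. It builds the representing matrix column by column, as the theorem suggests: the $j$-th column is $[T(\vec v_j)]_{B_W}$, found by solving a linear system. The variable names `A`, `P`, `Q`, `M` are my own choices, not the book's notation.

```python
import numpy as np

# Standard-basis matrix of T: R^2 -> R^3 (from the example above)
A = np.array([[1.0, 1.0],
              [-1.0, 0.0],
              [0.0, 0.0]])

# New bases: each column is one basis vector in standard coordinates
P = np.array([[1.0, -2.0],
              [3.0, 4.0]])          # domain basis {(1,3), (-2,4)}
Q = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 0.0],
              [1.0, 0.0, 0.0]])     # codomain basis

# Column j of the representing matrix is [T(v_j)]_{B_W}:
# apply T to the j-th domain basis vector, then solve Q c = T(v_j)
# to re-express the image in codomain-basis coordinates.
cols = [np.linalg.solve(Q, A @ P[:, j]) for j in range(P.shape[1])]
M = np.column_stack(cols)

# This agrees with the book's "pre- and post-multiply" recipe Q^{-1} A P.
assert np.allclose(M, np.linalg.inv(Q) @ A @ P)
print(M)
```

Using only `P` (with the standard codomain basis) gives $AP$, and using only `Q` (with the standard domain basis) gives $Q^{-1}A$, which is exactly the multiplying and pre-multiplying the question describes.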