I'm having problems understanding section 7.2 of FB's Linear Algebra, 3rd edition, and I can't find the solution online since no specific name is given to the matrices.
Sorry for the long explanation, but like I said I have no idea what the matrices are called.
I think I actually more or less figured it out while typing this, but I'm still not completely sure, and it'd be a waste just not to post after all that work. I would also still like to know if the $R_{B,B'}$ matrices have a name.
They talk about a matrix $R_{B,B'}$, which satisfies $T(\vec v)_{B'}=R_{B,B'}\vec v_B$ for all $\vec v$ in $V$.
It is the purpose of this section to study the effect that choosing different bases for coordinatization has on the matrix representations of a linear transformation. For simplicity, we shall derive our results in terms of the vector spaces $\mathbb{R}^n$. They can then be carried over to other finite-dimensional vector spaces using coordinatization isomorphisms.
They then give an example in which the differentiation transformation is represented relative to the reverse-ordered basis of $P_4$: $B=(x^4,x^3,x^2,x,1)$, and because $\frac{d}{dx}x^k=kx^{k-1}$, $$R_{B,B'}=\begin{bmatrix}0&0&0&0&0\\4&0&0&0&0\\0&3&0&0&0\\0&0&2&0&0\\0&0&0&1&0\\\end{bmatrix}$$
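To convince myself this matrix really differentiates, I multiplied it against the coordinate vector of a sample polynomial (a quick numpy check of my own, not from the book):

```python
import numpy as np

# Differentiation on P4 in the reverse-ordered basis B = (x^4, x^3, x^2, x, 1).
R = np.array([
    [0, 0, 0, 0, 0],
    [4, 0, 0, 0, 0],
    [0, 3, 0, 0, 0],
    [0, 0, 2, 0, 0],
    [0, 0, 0, 1, 0],
])

# p(x) = 2x^4 - x^3 + 5x + 7 has B-coordinates (leading coefficient first):
p = np.array([2, -1, 0, 5, 7])

# p'(x) = 8x^3 - 3x^2 + 5, i.e. B-coordinates [0, 8, -3, 0, 5]:
print(R @ p)
```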
All pretty straightforward so far.
Then they introduce theorem 7.1:
Let T be a linear transformation of a finite-dimensional vector space V into itself, and let B and B' be ordered bases of V. Let $R_B$ and $R_{B'}$ be the matrix representations of T relative to B and B', respectively. Then $$R_{B'}=C^{-1}R_BC$$ where $C=C_{B',B}$ is the change-of-coordinates matrix from B' to B. Consequently, $R_{B'}$ and $R_B$ are similar matrices.
So I'm not really sure what the different $R_B$ and $R_{B'}$ here represent, so I check their examples to find out:
First a linear transformation $T:\mathbb{R}^3 \rightarrow \mathbb{R}^3,\ T(x_1,x_2,x_3)=(x_1+x_2+x_3,\,x_1+x_2,\,x_3)$ $$B=([1,1,0],[1,0,1],[0,1,1])$$
They find $R_B$ by putting the vectors of B in column form and augmenting the matrix with the columns $T(\vec b_j)$.
$$\left[\begin{array}{ccc|ccc}1&1&0&2&2&2\\1&0&1&2&1&1\\0&1&1&0&1&1\end{array}\right]$$ They row reduce and say the right part is $R_B$, fine, and then go on to find $C$.
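For reference, that row reduction can be reproduced with sympy (my own check, not code from the book):

```python
from sympy import Matrix

# The book's technique: row reduce [B | T(B)] to [I | R_B].
# Columns of the left block are the basis vectors of B; columns of the
# right block are T applied to those basis vectors.
aug = Matrix([[1, 1, 0, 2, 2, 2],
              [1, 0, 1, 2, 1, 1],
              [0, 1, 1, 0, 1, 1]])

reduced, _ = aug.rref()
R_B = reduced[:, 3:]  # right block after reduction
print(R_B)
```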
Now the confounding part. They finally use two different bases in the next example with polynomial spaces.
$$T:P_2 \rightarrow P_2,\quad T(p(x))=p(x-1)$$ Consider two ordered bases $B=(x^2,x,1)$ and $B'=(x^2,x+1,x^2-x)$
They immediately write down: $$T(x^2)=(x-1)^2=x^2-2x+1,\quad T(x)=x-1,\quad T(1)=1$$ $$R_B=\begin{bmatrix}1&0&0\\-2&1&0\\1&-1&1\end{bmatrix}$$ As this is with respect to the basis $(x^2,x,1)$, which plays the role of the standard basis for polynomials, row reduction is unnecessary. But why is the result of the transformation written vertically in the third example, but horizontally in the second?
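For what it's worth, reading off the coefficients as columns can be checked mechanically (a sympy sketch of my own, with helper names of my choosing):

```python
from sympy import symbols, expand, Matrix, S

x = symbols('x')

# B = (x^2, x, 1); T(p(x)) = p(x-1). In this basis the B-coordinates of a
# quadratic are just its coefficients in descending degree, so T of each
# basis polynomial can be read off directly and written as a column.
basis = [x**2, x, S(1)]

def coords(p):
    """B-coordinates of a polynomial of degree at most 2."""
    p = expand(p)
    return [p.coeff(x, 2), p.coeff(x, 1), p.coeff(x, 0)]

# Rows of the list comprehension are coords of T(b); transpose -> columns.
R_B = Matrix([coords(b.subs(x, x - 1)) for b in basis]).T
print(R_B)
```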
I'll summarise some comments into an answer, since they seem to have been helpful. Given a linear map $T\colon V\to W$, and bases $B=\{v_1,\dotsc,v_n\}$ and $B'=\{w_1,\dotsc,w_m\}$ of $V$ and $W$, the matrix $R_{B,B'}$ of $T$ with respect to these bases is the $m\times n$ matrix with $(i,j)$-th entry $\lambda_{ij}$, where $$T(v_j)=\sum_{i=1}^m\lambda_{ij}w_i.$$
So to compute it, apply $T$ to each element of $B$, write the result as a linear combination of the vectors in $B'$, and write the coefficients in a column of the matrix.
In the second example above, you have a map $T\colon V\to V$, and $V$ has basis $B=([1,1,0],[1,0,1],[0,1,1])$. You have
\begin{align*} T([1,1,0])&=[2,2,0]=2[1,1,0]\\ T([1,0,1])&=[2,1,1]=[1,1,0]+[1,0,1]\\ T([0,1,1])&=[2,1,1]=[1,1,0]+[1,0,1] \end{align*}
so the matrix is
$$R_{B,B}=R_B=\begin{pmatrix}2&1&1\\0&1&1\\0&0&0\end{pmatrix}$$
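This recipe is also easy to carry out numerically: solving $B\,c = T(b_j)$ for each basis vector gives the $j$-th column (a numpy sketch of the computation above, names are my own):

```python
import numpy as np

def T(v):
    x1, x2, x3 = v
    return np.array([x1 + x2 + x3, x1 + x2, x3], dtype=float)

# Basis B as the columns of a matrix.
B = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)

# Expand each T(b_j) in the basis B: solving B c = T(b_j) gives the
# coefficient vector c, which becomes the j-th column of R_B.
columns = [np.linalg.solve(B, T(b)) for b in B.T]
R_B = np.column_stack(columns)
print(R_B)  # recovers the matrix above
```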
The row reduction technique gives you a way of computing this starting from
$$R_{B,E}=\begin{pmatrix}2&2&2\\2&1&1\\0&1&1\end{pmatrix}$$
where $E=([1,0,0],[0,1,0],[0,0,1])$ is the standard basis, but you can do it directly as above.
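As a final sanity check of Theorem 7.1, one can verify the similarity relation on the polynomial example. The matrix $C=C_{B',B}$ below is my own computation from the definition (its columns are the $B$-coordinates of the $B'$ vectors), so treat it as an assumption rather than the book's worked answer:

```python
import numpy as np

# B = (x^2, x, 1); T(p(x)) = p(x-1). Columns are the B-coordinates of T
# applied to each basis polynomial, as given in the question.
R_B = np.array([[ 1,  0, 0],
                [-2,  1, 0],
                [ 1, -1, 1]], dtype=float)

# C = C_{B',B}: columns are B-coordinates of B' = (x^2, x+1, x^2-x),
# i.e. x^2 -> (1,0,0), x+1 -> (0,1,1), x^2-x -> (1,-1,0).  (My computation.)
C = np.array([[1, 0,  1],
              [0, 1, -1],
              [0, 1,  0]], dtype=float)

# Theorem 7.1: R_{B'} = C^{-1} R_B C, so R_{B'} and R_B are similar.
R_Bp = np.linalg.inv(C) @ R_B @ C
print(R_Bp)
```

Expanding $T$ of each $B'$ vector directly in $B'$ gives the same matrix, which is a nice confirmation that the theorem and the column recipe agree.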