I have two bases for $\Bbb{R^2}$, $C:=\{(2,-1)^T,(6,-2)^T\}$ and $B:=\{(-6,-1)^T,(2,0)^T\}$. To find the change of basis matrix $P_{B\to C}$ we row reduce the system $$\begin{bmatrix}2&6&-6&2 \\-1&-2&-1&0\end{bmatrix}$$
until we have $$\begin{bmatrix}1&0&9&-2 \\0&1&-4&1\end{bmatrix}$$
which gives us the coordinates of the basis vectors of $B$ with respect to the basis $C$ in the columns of the right-hand $2\times 2$ block, i.e., $[b_1]_C$ and $[b_2]_C$; these are the columns of the change of basis matrix $P_{B\to C}$. I understand some of the connections here: the basis vectors of $C$ are just linear combinations of the natural basis of $\Bbb{R^2}$ (call it $E$), so the matrices of $C$ and $E$ are row equivalent. But why does the same sequence of row operations change the coordinates of the basis vectors of $B$ into $[b_1]_C$ and $[b_2]_C$?
It seems like there are two questions here, one about forming a change-of-basis matrix from two other matrices, and one about the mechanics of the specific method being used.
Taking the first one first, recall the definition of the coordinates of a vector $\mathbf v$ relative to some ordered basis $\mathcal B=\{\mathbf b_i\}$: they are the coefficients $a_i$ of the basis vectors in the unique linear combination $\mathbf v = a_1\mathbf b_1+\cdots+a_n\mathbf b_n$. We generally collect these coefficients into an $n$-tuple of scalars that your text denotes by $[\mathbf v]_{\mathcal B} = (a_1,\dots,a_n)^T\in\mathbb F^n$, where $\mathbb F$ is the field over which the vector space is defined. I’ll call this a $\mathcal B$-tuple for brevity.
Now let $$M = \begin{bmatrix}[\mathbf b_1]_{\mathcal C}&\cdots&[\mathbf b_n]_{\mathcal C}\end{bmatrix},$$ that is, the matrix with columns equal to the coordinate tuples of the elements of $\mathcal B$ relative to some other basis $\mathcal C$. Since $[\mathbf b_j]_{\mathcal B}$ is just the $j$th column of the identity matrix, we have $$M[\mathbf v]_{\mathcal B} = a_1[\mathbf b_1]_{\mathcal C}+\cdots+a_n[\mathbf b_n]_{\mathcal C}.$$ This is a linear combination of $\mathcal C$-tuples, so is itself a $\mathcal C$-tuple, namely, $[\mathbf v]_{\mathcal C}$. Thus, $M=P_{\mathcal B\to\mathcal C}$. Since $M^{-1}M=I$, it should also be clear that $M^{-1}$ maps $[\mathbf b_j]_{\mathcal C}$ to $[\mathbf b_j]_{\mathcal B}$, so $P_{\mathcal C\to\mathcal B} = M^{-1}$.
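To make the claim $M[\mathbf v]_{\mathcal B}=[\mathbf v]_{\mathcal C}$ concrete, here is a small numerical check with NumPy, using the matrices from your question; the coordinate tuple $[\mathbf v]_{\mathcal B}=(3,-5)^T$ is an arbitrary choice for illustration:

```python
import numpy as np

# Bases from the question; columns are the basis vectors in standard coordinates.
C = np.array([[2.0, 6.0], [-1.0, -2.0]])
B = np.array([[-6.0, 2.0], [-1.0, 0.0]])

# M from the row reduction in the question: columns are [b_1]_C and [b_2]_C.
M = np.array([[9.0, -2.0], [-4.0, 1.0]])

# M should send B-coordinates to C-coordinates.  Pick an arbitrary
# coordinate tuple [v]_B, form v two ways, and compare in standard coordinates.
v_B = np.array([3.0, -5.0])
v_standard = B @ v_B      # v = 3*b_1 - 5*b_2
v_C = M @ v_B             # claimed C-coordinates of v
assert np.allclose(C @ v_C, v_standard)
```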
We can also perform this change of basis in two steps, by first mapping to the standard basis, i.e., $$P_{\mathcal B\to\mathcal C} = P_{\mathcal E\to\mathcal C}P_{\mathcal B\to\mathcal E} = \begin{bmatrix}[\mathbf c_1]_{\mathcal E} & \cdots & [\mathbf c_n]_{\mathcal E}\end{bmatrix}^{-1} \begin{bmatrix}[\mathbf b_1]_{\mathcal E} & \cdots & [\mathbf b_n]_{\mathcal E}\end{bmatrix}.$$ In your case, this is $C^{-1}B$, with $$B=\begin{bmatrix}-6&2\\-1&0\end{bmatrix}, \qquad C=\begin{bmatrix}2&6\\-1&-2\end{bmatrix}.$$
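The two-step factorization is easy to check numerically; a quick sketch with NumPy:

```python
import numpy as np

C = np.array([[2.0, 6.0], [-1.0, -2.0]])   # columns: c_1, c_2
B = np.array([[-6.0, 2.0], [-1.0, 0.0]])   # columns: b_1, b_2

# Two-step change of basis: B-coordinates -> standard -> C-coordinates.
P = np.linalg.inv(C) @ B
```

This reproduces the right-hand block of the row reduction, $\begin{bmatrix}9&-2\\-4&1\end{bmatrix}$.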
As to the second question regarding computing $C^{-1}B$ via row-reduction, remember that every elementary row operation corresponds to left-multiplication by a particular invertible matrix, and so the entire process of row-reduction is equivalent to left-multiplication by some invertible matrix $E$. If the matrix $C$ is invertible, its RREF is the identity matrix, i.e., $EC=I$, from which we have $E=C^{-1}$. Because left-multiplication acts on every column of the augmented matrix at once, if we augment $C$ and reduce it to its RREF, then whatever is on the right side also gets multiplied by $C^{-1}$: $$\left[C\mid B\right] \to C^{-1}\left[C\mid B\right] = \left[I\mid C^{-1}B\right],$$ which is exactly what was needed for $P_{\mathcal B\to\mathcal C}$. Comparing this to your specific case, the reduced augmented matrix is $$\left[\begin{array}{cc|cc}1&0 & 9&-2 \\ 0&1 & -4&1\end{array}\right]$$ so $P_{\mathcal B\to\mathcal C}$ is the submatrix on the right side.
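If you want to watch the mechanics, here is a minimal Gauss-Jordan sketch in Python (the `rref` helper is my own illustrative code, not from any particular library), applied to the augmented matrix from your question:

```python
import numpy as np

def rref(A):
    """Reduce A to reduced row echelon form by Gauss-Jordan elimination."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = np.argmax(np.abs(A[r:, c])) + r   # partial pivoting
        if np.isclose(A[pivot, c], 0.0):
            continue                               # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]              # swap rows
        A[r] /= A[r, c]                            # scale pivot row to 1
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]             # clear rest of column
        r += 1
    return A

aug = np.array([[2.0, 6.0, -6.0, 2.0],
                [-1.0, -2.0, -1.0, 0.0]])          # [C | B]
reduced = rref(aug)
# Left block becomes I; right block becomes C^{-1}B = P_{B->C}.
assert np.allclose(reduced, [[1.0, 0.0, 9.0, -2.0],
                             [0.0, 1.0, -4.0, 1.0]])
```

The same row operations act on all four columns simultaneously, which is precisely why the right-hand block ends up multiplied by $C^{-1}$.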
Note that matrix inversion is a special case of this method in which we augment with the identity matrix: $$\left[C\mid I\right] \to C^{-1}\left[C\mid I\right] = \left[I\mid C^{-1}\right].$$
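As a final sanity check, the inversion-by-augmentation special case can be carried out with explicit elementary row operations on your $C$ (each line below is one row operation, spelled out for this $2\times 2$ case):

```python
import numpy as np

C = np.array([[2.0, 6.0], [-1.0, -2.0]])
aug = np.hstack([C, np.eye(2)])    # [C | I]

aug[0] /= 2.0            # R1 <- R1/2       -> [1, 3 | 1/2, 0]
aug[1] += aug[0]         # R2 <- R2 + R1    -> [0, 1 | 1/2, 1]
aug[0] -= 3.0 * aug[1]   # R1 <- R1 - 3*R2  -> [1, 0 | -1, -3]

C_inv = aug[:, 2:]       # the right block is now C^{-1}
assert np.allclose(C_inv, np.linalg.inv(C))
```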