Given the bases $a = \{(0,2),(2,1)\}$ and $b = \{(1,0),(1,1)\}$, compute the change-of-coordinates matrix from basis $a$ to basis $b$.
Then, given that the coordinates of $z$ with respect to the basis $a$ are $(2,2)$, use the previous part to compute the coordinates of $z$ with respect to the basis $b$.
The way I understood the first part is that I have to multiply the vectors of $b$ by the coordinates of the vectors of $a$ to get the change-of-coordinates matrix from $a$ to $b$. This gives me the matrix $\begin{bmatrix}2&3\\2&1\end{bmatrix}$.
For the second part, I take the inverse of the matrix I got above and multiply it by the coordinates of $z$ to get the coordinates of $z$ with respect to the basis $b$. The inverse is $\begin{bmatrix}-1/4&3/4\\1/2&-1/2\end{bmatrix}$, which I then multiply by $(2,2)$.
However, I am not sure that this is correct.
What you can do is use the change-of-coordinates matrices between each basis and the standard basis: let $A=\begin{pmatrix}0&2\\2&1\end{pmatrix}$ and $B=\begin{pmatrix}1&1\\0&1\end{pmatrix}$, whose columns are the vectors of $a$ and $b$ written in the standard basis.
Then what is normally called the change-of-basis matrix from $a$ to $b$ is the matrix that takes vectors written in terms of $b$ and returns them written in terms of $a$. This gives the matrix $C = A^{-1}B$.
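Carrying out that computation explicitly, using the usual formula for the inverse of a $2\times 2$ matrix ($\det A = 0\cdot 1 - 2\cdot 2 = -4$):

$$C = A^{-1}B = \begin{pmatrix}0&2\\2&1\end{pmatrix}^{-1}\begin{pmatrix}1&1\\0&1\end{pmatrix} = -\frac{1}{4}\begin{pmatrix}1&-2\\-2&0\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix} = \begin{pmatrix}-1/4&1/4\\1/2&1/2\end{pmatrix}$$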
For the second part, apply $C^{-1} = B^{-1}A$ to the coordinate vector of $z$ with respect to $a$, namely $(2,2)$; the result is the coordinate vector of $z$ with respect to $b$.
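As a quick numerical sanity check, here is a small NumPy sketch of the procedure above (the matrices $A$ and $B$ are taken from this answer; the variable names are just for illustration):

```python
import numpy as np

# Columns of A and B are the basis vectors of a and b in the standard basis.
A = np.array([[0.0, 2.0],
              [2.0, 1.0]])
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# C = A^{-1} B converts b-coordinates to a-coordinates,
# so C^{-1} = B^{-1} A converts a-coordinates to b-coordinates.
C_inv = np.linalg.solve(B, A)

z_a = np.array([2.0, 2.0])   # coordinates of z with respect to a
z_b = C_inv @ z_a            # coordinates of z with respect to b
print(z_b)                   # -> [-2.  6.]

# Sanity check: both coordinate vectors describe the same point,
# since A @ z_a and B @ z_b are both z in standard coordinates.
assert np.allclose(A @ z_a, B @ z_b)
```

Using `np.linalg.solve(B, A)` computes $B^{-1}A$ without forming the inverse explicitly, which is the numerically preferred way to apply an inverse.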