From section 47 called Similarity:
(In the following I will represent matrices like $[A]$ and linear transforms as $A$ and also sorry if I am not rigorous enough)
Halmos proves that when we have one linear transformation $T:V\longrightarrow V$ with matrix $[B]$ in a basis $X$ (vectors $\vec x_1, \vec x_2, \dots, \vec x_n$) and matrix $[C]$ in a basis $Y$ (vectors $\vec y_1, \vec y_2, \dots, \vec y_n$), and the two bases are related by $[A]\vec x_i=\vec y_i$, then the two matrices are related by $[C]=[A]^{-1}[B][A]$.
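(To make this concrete for myself, I checked the matrix relation numerically; the particular matrices below are just an illustration I made up, a rotation for $[A]$ and a projection for $[B]$, not anything from Halmos.)

```python
import numpy as np

# Basis X: the standard basis of R^2.
# Basis Y: obtained by rotating X through 45 degrees, so y_i = [A] x_i.
theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# [B]: matrix of T in basis X -- projection onto the first axis.
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# [C]: matrix of the same T in basis Y.
C = np.linalg.inv(A) @ B @ A

# Check on a vector: take Y-coordinates c, convert to X-coordinates
# with [A], apply [B] there, and compare with applying [C] directly.
c = np.array([2.0, -1.0])          # coordinates of some v in basis Y
v_X = A @ c                        # the same vector in X-coordinates
Tv_X = B @ v_X                     # T(v) in X-coordinates
Tv_Y = C @ c                       # T(v) in Y-coordinates
assert np.allclose(A @ Tv_Y, Tv_X)  # both routes agree
```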
He also proves that when we have two linear transformations $B$ and $C$, where $[B]=(β_{ij})$ is a matrix and the two transformations are defined by $B\vec x_j= \sum_{i=1}^nβ_{ij}\vec x_i$ and $C\vec y_j= \sum_{i=1}^nβ_{ij}\vec y_i$, then the two transformations are related by $C=ABA^{-1}$.
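(I also verified this second relation numerically. The idea is that $C\vec y_j=\sum_iβ_{ij}\vec y_i=\sum_iβ_{ij}A\vec x_i=AB\vec x_j=ABA^{-1}\vec y_j$; in the sketch below the matrices $β$ and $[A]$ are arbitrary choices of mine, with $X$ taken to be the standard basis so that everything can be written in one coordinate system.)

```python
import numpy as np

# beta: the common coefficient matrix (β_ij).
beta = np.array([[1.0, 2.0],
                 [0.0, 3.0]])

# Basis X = standard basis; basis Y given by y_i = A x_i (A invertible).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Y = [A[:, 0], A[:, 1]]             # the columns of A are the y_i

# Operator B sends x_j to sum_i beta[i, j] x_i, so in standard
# coordinates it is multiplication by beta.
# Operator C sends y_j to sum_i beta[i, j] y_i; its standard-coordinate
# matrix should then be A beta A^{-1}.
C = A @ beta @ np.linalg.inv(A)
for j in range(2):
    Cyj = sum(beta[i, j] * Y[i] for i in range(2))
    assert np.allclose(C @ A[:, j], Cyj)   # C y_j = sum_i beta_ij y_i
```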
While I have proved these things, I can't intuitively (geometrically) understand why the relation between the transformations differs from the relation between the matrices, since a matrix is just a way to express a transformation in a coordinate system (please correct me if I am wrong, as I am not a mathematician).
Also, which of the two relations is used when somebody deals with a change of basis?
Trying to understand these concepts through a rotation matrix $[A]$ and a projection matrix $[B]$, I figured out that in the first case (the relation between matrices), the matrix $[C]$ has this form so that it again projects onto the same plane as $[B]$ did; it just has to have different entries in order to work in the new basis $Y$, and I suppose that is why Halmos calls the two matrices similar. But I can't find a similarly intuitive, geometrical explanation or example of how the second case, the relation between linear transformations, works, and thus I can't explain why the two transformations are called similar.
EDIT:
I understood why $[C]=[A]^{-1}[B][A]$, but I didn't understand why $C=ABA^{-1}$, and why this difference between the two relations exists.
Well, to see an operator $T$ on a finite-dimensional space $V$ as a matrix, you first need to fix a basis of $V$, right?
So, you are allowed to change the basis you've chosen, but then we expect that the matrix representing $T$ changes as well.
The role the matrix $[A]$ you cited plays is to change the coordinates for you.
If you fix a basis $\mathcal{B}$ of $V$, you get a matrix $[B]$ representing $T$; then, every time you want to evaluate $T$ on a vector $v$, you can turn $v$ into a coordinate vector by writing it in terms of $\mathcal{B}$ and extracting the coefficients, right? Finally, you just multiply: $[B]v$.
Now, if you want to represent $T$ in a new basis $\mathcal{B}'$, you must know how $T$ acts on $\mathcal{B}'$. But if you want to express this operation as a product of matrices, you must regard a vector $v$ as a linear combination of the vectors in $\mathcal{B}'$. So, if $[A]$ changes the coordinates of a vector written in $\mathcal{B}'$ into coordinates in $\mathcal{B}$, then $[A]^{-1}$ changes coordinates in $\mathcal{B}$ into coordinates in $\mathcal{B}'$.
With this in mind, you can see the product $[A]^{-1}[B][A]v$ this way: first, $[A]v$ rewrites $v$ (given in $\mathcal{B}'$-coordinates) in $\mathcal{B}$-coordinates; then $[B]$ applies $T$ in the basis $\mathcal{B}$; finally, $[A]^{-1}$ translates the result back into $\mathcal{B}'$-coordinates.
This is how I see the operation you've mentioned.
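The product $[A]^{-1}[B][A]v$ can also be traced step by step numerically; the particular matrices below are made up purely for illustration, with $[A]$ playing the role of the $\mathcal{B}'\to\mathcal{B}$ coordinate change:

```python
import numpy as np

# [A] converts coordinates written in B' to coordinates written in B;
# [B] is the matrix of T in the basis B. Numbers are illustrative.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([1.0, 3.0])           # coordinates of a vector in B'

step1 = A @ v                      # rewrite v in B-coordinates
step2 = B @ step1                  # apply T, still in B-coordinates
step3 = np.linalg.inv(A) @ step2   # translate the result back to B'

# The three steps together are exactly [A]^{-1}[B][A] v.
assert np.allclose(step3, np.linalg.inv(A) @ B @ A @ v)
```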