Well, this is from Aluffi's Algebra: Chapter $0$. In this textbook, a finitely presented module is the cokernel of a homomorphism between finitely generated free modules, and we study the module by studying that homomorphism. To study a homomorphism between finitely generated free modules, we can study the matrix corresponding to it.
But then I came across this proposition, which confused me a lot. Looking at the first part of his proof, I see that he uses the fact that equivalent matrices represent the same homomorphism. But how can he change the basis? When we say a matrix represents a homomorphism, we mean the homomorphism induced by matrix multiplication, right? That is, the basis is the standard basis. But if we are allowed to change the basis freely, what does it mean to say a matrix represents a homomorphism, and thus a module?

Suppose you have a map $M : R^n \to R^m$ where we interpret $M$ as an $m \times n$ matrix. Performing a sequence of elementary row and column operations corresponds to replacing $M$ with $PMQ$, where $P : R^m \to R^m$ and $Q : R^n \to R^n$ ($P$ encodes the row operations, $Q$ the column operations). Elementary row/column operations can be reversed, so $P$ and $Q$ are invertible.
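As a quick sanity check (a minimal Python sketch with a hypothetical $2 \times 2$ example), each elementary row operation on $M$ is left multiplication by an elementary matrix, and each column operation is right multiplication:

```python
# Minimal integer matrix multiplication (no external libraries).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[2, 0],
     [0, 3]]

# Row operation "add 2 * row 1 to row 2" as left multiplication by E.
E = [[1, 0],
     [2, 1]]
row_op = matmul(E, M)   # same as performing the row operation directly

# Column operation "swap the two columns" as right multiplication by F.
F = [[0, 1],
     [1, 0]]
col_op = matmul(M, F)   # same as performing the column operation directly
```

Both $E$ and $F$ have determinant $\pm 1$, so they are invertible over $\mathbf Z$; undoing an operation is multiplication by the inverse elementary matrix.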
Let $e_1,\dots,e_m$ and $f_1,\dots,f_n$ be the standard bases on $R^m$ and $R^n$ respectively. Since $P, Q$ are invertible, $Pe_1,\dots,Pe_m$ and $Q^{-1}f_1,\dots,Q^{-1}f_n$ are also bases of $R^m$ and $R^n$ respectively. Then $PMQ$ gives a map $R^n \to R^m$ in these bases. It is not the same map as $M$, but it has an isomorphic cokernel.
Specifically, if $M = (m_{i,j})$ is the matrix that maps $f_j$ to $\sum_{i = 1}^m m_{i,j}e_i$, then $PMQ$ is the matrix that maps $Q^{-1}f_j$ to $\sum_{i = 1}^m m_{i,j}Pe_i$. If we call $M' = PMQ$, $f_j' = Q^{-1}f_j$ and $e_i' = Pe_i$, then we can write this as
$$ Mf_j = \sum_{i = 1}^m m_{i,j}e_i \text{ and } M'f_j' = \sum_{i = 1}^m m_{i,j}e_i'. $$
So you can see the maps are related.
We have a commutative diagram where the vertical maps are isomorphisms: $\require{AMScd}$ \begin{CD} R^n = \bigoplus_{j = 1}^n Rf_j @>{M}>> R^m = \bigoplus_{i = 1}^m Re_i\\ @V{Q^{-1}}VV @VV{P}V \\ R^n = \bigoplus_{j = 1}^n Rf_j' @>{M'}>> R^m = \bigoplus_{i = 1}^m Re_i' \end{CD}
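The commutativity of the square, $M' \circ Q^{-1} = P \circ M$, follows directly from $M' = PMQ$, and can be checked numerically. A Python sketch with hypothetical choices of $P$ and $Q$ (any invertible integer matrices would do):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[2, 0], [0, 3]]
P = [[0, 1], [1, 0]]        # a row swap; P is its own inverse
Q = [[1, 1], [0, 1]]        # add column 1 to column 2
Qinv = [[1, -1], [0, 1]]    # its inverse over Z

Mprime = matmul(matmul(P, M), Q)   # M' = PMQ

# The square commutes: M' Q^{-1} = P M.
left = matmul(Mprime, Qinv)
right = matmul(P, M)
```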
For example, consider the map $M : \mathbf Z^2 \to \mathbf Z^2$ defined by the matrix $$ \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}. $$ This map takes the standard basis $e_1, e_2$ and doubles one of the basis vectors and triples the other. The cokernel of $M$ is $$\mathbf Z^2/\operatorname{im}(M) = (\mathbf Ze_1 + \mathbf Ze_2)/(2\mathbf Ze_1 + 3\mathbf Ze_2) \cong \mathbf Z/2\mathbf Z \oplus \mathbf Z/3\mathbf Z.$$
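One way to see the size of this cokernel concretely: since $M$ is diagonal, every class in $\mathbf Z^2/\operatorname{im}(M)$ has a unique representative with first coordinate mod $2$ and second coordinate mod $3$, giving $|\det M| = 6$ classes. A quick enumeration in Python:

```python
# Representatives of Z^2 / im(M) for M = diag(2, 3):
# reduce the first coordinate mod 2 and the second mod 3.
classes = {(x % 2, y % 3) for x in range(-10, 10) for y in range(-10, 10)}

det = 2 * 3 - 0 * 0   # the number of cosets equals |det M|
```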
Now let's say I have another basis of $\mathbf Z^2$ such as $e_1' = 2e_1 + e_2$ and $e_2' = 7e_1 + 4e_2$. Now we don't have $Me_1' = 2e_1'$ and $Me_2' = 3e_2'$, but if we define a matrix $M'$ to have this property, i.e. $M'e_1' = 2e_1'$ and $M'e_2' = 3e_2'$, then $M'$ is equivalent to $M$. So $M$ and $M'$ are different maps but they do the same thing to their respective bases. We have
$$ \operatorname{coker} M' = \mathbf Z^2/\operatorname{im}(M') = (\mathbf Ze_1' + \mathbf Ze_2')/(2\mathbf Ze_1' + 3\mathbf Ze_2') \cong \mathbf Z/2\mathbf Z \oplus \mathbf Z/3\mathbf Z. $$
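Writing $M'$ in standard coordinates makes this concrete. If $S$ is the matrix whose columns are $e_1', e_2'$, then $M' = S\,\operatorname{diag}(2,3)\,S^{-1}$ (a Python sketch):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

S = [[2, 7],
     [1, 4]]          # columns are e1' = (2,1) and e2' = (7,4); det S = 1
Sinv = [[4, -7],
        [-1, 2]]      # inverse over Z since det S = 1
D = [[2, 0],
     [0, 3]]

Mprime = matmul(matmul(S, D), Sinv)   # M' in standard coordinates

def apply(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

# M' doubles e1' and triples e2':
double = apply(Mprime, [2, 1])   # should equal 2 * e1' = (4, 2)
triple = apply(Mprime, [7, 4])   # should equal 3 * e2' = (21, 12)
```

Note that $\det M' = 6 = \det M$, consistent with both cokernels having order $6$.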
In fact, we don't need $M'$ to go from the primed basis to itself. The matrix $M''$ defined by $M''e_1 = 2e_1'$ and $M''e_2 = 3e_2'$ is also equivalent, as is $M'''$ defined by $M'''e_1' = 2e_1$ and $M'''e_2' = 3e_2$. For instance
$$ \operatorname{im}(M'') = M''(\mathbf Ze_1 + \mathbf Ze_2) = 2\mathbf Ze_1' + 3\mathbf Ze_2'. $$
Therefore,
$$ \operatorname{coker}(M'') = (\mathbf Ze_1' + \mathbf Ze_2')/(2\mathbf Ze_1' + 3\mathbf Ze_2') \cong \mathbf Z/2\mathbf Z \oplus \mathbf Z/3\mathbf Z.$$
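In standard coordinates $M''$ is simply the matrix whose columns are $2e_1'$ and $3e_2'$; a quick Python check that its determinant is again $6$:

```python
# Columns of M'' are 2 * e1' = (4, 2) and 3 * e2' = (21, 12).
Mpp = [[4, 21],
       [2, 12]]

det = Mpp[0][0] * Mpp[1][1] - Mpp[0][1] * Mpp[1][0]   # 4*12 - 21*2
```

The index of the image lattice in $\mathbf Z^2$ is $|\det M''| = 6$, matching the order of $\mathbf Z/2\mathbf Z \oplus \mathbf Z/3\mathbf Z$.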