This question may sound silly, but after carefully re-reading my linear algebra notes one particular detail caught my eye.
Let $V$ be a $\Bbb K$-vector space with $\dim(V) = n$. An $\textbf{endomorphism}$ $f$ is said to be diagonalizable if there exists a basis $\mathcal{B}$ of $V$ such that the matrix representing $f$ with respect to $\mathcal{B}$ is diagonal.
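As a concrete sanity check of this definition (an illustrative SymPy sketch; the matrix is made up), `diagonalize()` produces exactly the change of basis in the definition: the columns of $P$ form the eigenbasis $\mathcal{B}$ and $D$ is the matrix of $f$ with respect to it.

```python
from sympy import Matrix

# A diagonalizable endomorphism of Q^2, written in the standard basis
# (made-up example with eigenvalues 2 and 5).
A = Matrix([[4, 1],
            [2, 3]])

# diagonalize() returns P and D with A = P*D*P^{-1};
# the columns of P are an eigenbasis, and D represents f in that basis.
P, D = A.diagonalize()

assert A == P * D * P.inv()
assert D.is_diagonal()
```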
Up to this point there is nothing strange in the definition of diagonalizability, but as far as I can tell one can also define a map $g: V \to W$ with $\dim(V) = \dim(W)$, and the representative matrix of such a $g$ is still a square matrix.
The question is: do we require $f$ to be an endomorphism because otherwise there is nothing relevant to say (even though, algorithmically speaking, we could run the same eigenvalue/eigenvector computation on any square matrix), or because $V$ and $W$, having the same dimension, are isomorphic as vector spaces, so we are essentially working with an endomorphism anyway? My guess is that the second reasoning makes sense, since (using the little category theory I know) if $F: W \to V$ is an isomorphism and $g: V \to W$, then $$\require{AMScd} \begin{CD} V @>{g}>> W \\ @V{id_{V}}VV @V{F}VV \\ V @>{Fg}>> V \end{CD}$$
is a commutative diagram and $Fg$ is an endomorphism. I hope for some clarification, thank you.
Edit
After some responses: the possibility of choosing different bases for $V$ and $W$ leads to every homomorphism having a diagonal form (which is not the case when considering only endomorphisms). In the situation described above, can we deduce some information (eigenvalues/eigenvectors, ...) from $g$ that can be transported to $Fg$?
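A quick numerical sanity check (an illustrative SymPy sketch; the matrices $g$, $F_1$, $F_2$ are made up) suggests the answer is subtle: the eigenvalues of $Fg$ depend on the chosen isomorphism $F$, so they are not intrinsic to $g$.

```python
from sympy import Matrix, eye

# g : V -> W, written as a matrix in some fixed pair of bases.
g = Matrix([[1, 1],
            [0, 1]])

# Two different isomorphisms F : W -> V, also as matrices.
F1 = eye(2)
F2 = 2 * eye(2)

# Both F1*g and F2*g are endomorphisms of V, but their eigenvalues
# (returned as {eigenvalue: algebraic multiplicity}) differ:
print((F1 * g).eigenvals())  # {1: 2}
print((F2 * g).eigenvals())  # {2: 2}
```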
According to Wikipedia, when we speak of diagonalizable linear maps (or matrices) in linear algebra, we are always talking about endomorphisms in a finite-dimensional vector space over a field (or square matrices with coefficients in a field). However, the term diagonal matrix can be used to refer to non-square matrices or matrices over an arbitrary ring.
I think I know why: not every endomorphism of a finite-dimensional vector space can be diagonalized in the form $P^{-1}AP$, where $P$ is some invertible matrix. But when we talk about homomorphisms from one vector space (free module) $X$ into another vector space (free module) $Y$ (where the matrix can be square or non-square), a change of basis transforms the matrix into $Q^{-1}AP$, where $P,Q$ are invertible matrices over the ring or field.

Over a field, any (square or non-square) matrix $A$ can be brought into the block form $$\begin{bmatrix} I&\\&0 \end{bmatrix}$$ with only $1$'s and $0$'s on the diagonal by the operation $Q^{-1}AP$. Over the ring of integers (this holds over any principal ideal domain, where the result is called the Smith normal form; I don't know about arbitrary commutative rings), any (square or non-square) matrix $A$ can be brought into the diagonal form $$\begin{bmatrix} d_1&&&\\&d_2&&\\&&\ddots&\\&&&0 \end{bmatrix}$$ by the operation $Q^{-1}AP$. Therefore, if we allow non-square matrices (or allow homomorphisms whose domain and codomain differ, even when they have the same dimension), then every matrix is diagonalizable, and there is nothing left to talk about.
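To make both claims concrete (an illustrative SymPy sketch; the matrices $A$, $Q^{-1}$, $P$, $B$ are made up): over $\Bbb Q$, explicit row and column operations bring a rank-$1$ matrix into the block form above, and over $\Bbb Z$, SymPy's `smith_normal_form` produces a diagonal matrix.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# A rank-1 matrix over Q.
A = Matrix([[1, 2],
            [2, 4]])

# Row operation R2 -> R2 - 2*R1, encoded as Q^{-1} acting on the left.
Qinv = Matrix([[1, 0],
               [-2, 1]])
# Column operation C2 -> C2 - 2*C1, encoded as P acting on the right.
P = Matrix([[1, -2],
            [0, 1]])

# Q^{-1} A P is the block form diag(I_r, 0) with r = rank(A) = 1.
N = Qinv * A * P
assert N == Matrix([[1, 0],
                    [0, 0]])
assert A.rank() == 1

# Over Z, an arbitrary integer matrix is brought to diagonal
# (Smith normal) form by the same kind of two-sided operation.
B = Matrix([[2, 4],
            [6, 10]])
S = smith_normal_form(B, domain=ZZ)
assert S.is_diagonal()
```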