In discussions of linear transformations, the eigenvalues of the transformation matrix ($A$) are usually the main point of concern, especially in applications to physical systems. Why is that (why does it always boil down to the eigenvalues)?
It is understandable that, for a transformation matrix $A$, quantities like the trace ($Tr(A)$), the determinant ($Det(A)$), the characteristic determinant ($Det(A-\lambda I)$) and therefore the eigenvalues ($\lambda_{i}$) are invariant under the similarity transformations that implement linear changes of coordinates, so these are special geometrical quantities independent of the choice of coordinates. Indeed, diagonalizing $A$ by the appropriate similarity transformation ($S$) to obtain the diagonalized version $B = S^{-1} A S$, with only the $\lambda_{i}$ as its diagonal elements, makes that clear. But what about the various elements of any matrix $A$ that has the same eigenvalues as its diagonalized matrix $B$? What about the infinitely many possible matrices $A$ that share the same set of eigenvalues with $B$ but differ in their overall elements? Are they redundant descriptions of the same transformation? Do they have any exact use?
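To make the invariance concrete, here is a small numerical check; the matrices $A$ and $S$ are arbitrary choices purely for illustration:

```python
import numpy as np

# An arbitrary transformation matrix A and an invertible change-of-basis matrix S.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 2.0],
              [1.0, 3.0]])  # det(S) = 1, so S is invertible

# Similarity transformation: B = S^{-1} A S
B = np.linalg.inv(S) @ A @ S

# Trace, determinant, and eigenvalues are unchanged by the similarity transformation,
# even though the individual entries of B differ from those of A.
print(np.trace(A), np.trace(B))
print(np.linalg.det(A), np.linalg.det(B))
print(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))
```

Running this shows matching traces, determinants, and eigenvalue sets for $A$ and $B$ despite their entries being different.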
Edit: for example, in the study of area-preserving transformations in 2D, we require that the determinant be unity, and we then classify the three possible transformation types (squeeze, rotation, shear) by the nature/values of their eigenvalues alone (e.g. see Ch-2 in ref)
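As a sketch of that classification (the parameter values below are arbitrary, chosen only for illustration): all three matrices have determinant $1$, but a squeeze has a real reciprocal pair of eigenvalues, a rotation has a complex-conjugate pair on the unit circle, and a shear has the repeated eigenvalue $1$.

```python
import numpy as np

theta, s, k = 0.4, 1.5, 0.7  # arbitrary illustrative parameters

squeeze = np.array([[s, 0.0],
                    [0.0, 1.0 / s]])          # eigenvalues s and 1/s
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # eigenvalues e^{±i theta}
shear = np.array([[1.0, k],
                  [0.0, 1.0]])                # repeated eigenvalue 1

for name, M in [("squeeze", squeeze), ("rotation", rotation), ("shear", shear)]:
    # Each is area-preserving (det = 1); the eigenvalue pattern distinguishes the type.
    print(name, np.linalg.det(M), np.linalg.eigvals(M))
```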
In many cases, results are stated in terms of eigenvalues and eigenvectors because many theorems are much easier to state and prove for diagonal matrices. This is one reason why we spend so much time on diagonalization.
Just because two matrices have the same eigenvalues, that doesn't mean they describe the same transformation. For example, the $2 \times 2$ identity matrix $I$ and $J= \begin{pmatrix} 1& 1 \\ 0 & 1 \end{pmatrix}$ have the same eigenvalues (the repeated eigenvalue $1$), but they are not similar (one is diagonalizable, the other is not).
You have probably been implicitly assuming that if you modify some entries of a diagonalizable matrix it still remains diagonalizable, but as this example shows, that is not true. What this means is that the eigenvalues alone do not specify the transformation.
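A quick numerical check of this example: $I$ and $J$ share the repeated eigenvalue $1$, yet they act differently on vectors, so they are genuinely different transformations.

```python
import numpy as np

I = np.eye(2)
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Same characteristic polynomial: both have the repeated eigenvalue 1.
print(np.linalg.eigvals(I), np.linalg.eigvals(J))

# But they act differently on vectors:
v = np.array([0.0, 1.0])
print(I @ v)  # -> [0. 1.], unchanged
print(J @ v)  # -> [1. 1.], sheared

# They also cannot be similar: S^{-1} I S = I for every invertible S, yet J != I.
```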