As far as I know, the metric matrix measures the lengths of vectors regardless of the basis chosen to represent them. I read in the book "Álgebra Geométrica e Aplicações" by Fernandes, Lavor & Neto that the diagonalized metric matrix identifies the signature of a Geometric Algebra by counting the positive entries $p$, negative entries $q$, and zero entries $r$ along its diagonal.
For example, if I have the matrix $$A = \begin{pmatrix}5 & -3/4 \\ -3/4 & 5/16\end{pmatrix}$$ built from non-orthogonal basis vectors, how could it be translated into a signature, if that is possible? I ask because the diagonalized version of $A$ does not have the values commonly found on the diagonal in definitions of the geometric product of a vector with itself: $e_i^2 \in \{1, -1, 0\}$. Does this only apply to orthogonal vectors?
Also, why is it necessary for the matrix representing the signature of a Geometric Algebra to be diagonal?
It is not necessary to have a diagonal matrix to define a Clifford algebra; it is just convenient.
The (eigenvalue) diagonalization won't have $\pm1$'s on its diagonal, but the signs of its entries still give the signature.
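A quick numerical sketch of this point, using the matrix from the question (the NumPy calls here are standard, but the script itself is just an illustration):

```python
import numpy as np

# The metric matrix from the question, as floats.
A = np.array([[5.0, -3.0 / 4.0],
              [-3.0 / 4.0, 5.0 / 16.0]])

# Eigenvalues of a real symmetric matrix are real; eigvalsh returns
# them in ascending order.
eigvals = np.linalg.eigvalsh(A)
print(eigvals)  # neither is +1 or -1, but both are positive

# Counting signs on the diagonal of the eigen-diagonalization
# gives the signature (p, q, r).
p = int(np.sum(eigvals > 0))
q = int(np.sum(eigvals < 0))
r = int(np.sum(np.isclose(eigvals, 0)))
print((p, q, r))  # (2, 0, 0): a positive-definite (Euclidean) metric
```

So even though the diagonal entries are not $\pm1$, the sign count already tells you this particular $A$ has signature $(2,0,0)$.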
The issue here is that a linear transformation transforms differently from a bilinear form. If $A$ represents a transformation $v \mapsto Av$ in a basis $B$, and $U$ is a change-of-basis matrix taking $v'$ in basis $B'$ to $Uv'$ in basis $B$, then $A$ transforms to $B'$ as $$ A \mapsto U^{-1}AU. $$ However, if $A$ is a bilinear form $(v,w) \mapsto v^TAw$, it must transform as $$ A \mapsto U^TAU. $$ Both of these can be derived easily by considering how vectors get mapped. Now, because $A$ is real symmetric, if $U$ changes basis to the "normalized" eigenbasis it will be an "orthogonal matrix". This has nothing to do with orthogonality with respect to $A$! It means "the columns of $U$ are orthonormal with respect to the standard inner product $(v,w) \mapsto v^Tw$." Such a matrix satisfies $U^{-1} = U^T$, so the two transformations above happen to coincide. But once you rescale the eigenbasis so that it is normalized with respect to $A$, the matrix $U$ is in general no longer "orthogonal", meaning $U^{-1} \ne U^T$. Then $U^{-1}AU$ is still a valid (eigen)diagonalization of $A$, but it no longer equals the $U^TAU$ we are actually interested in; instead, $U^TAU$ lands in the "all $0, +1, -1$" diagonal form.
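Both stages of this argument can be checked numerically. A minimal sketch (again with the question's matrix, assuming NumPy):

```python
import numpy as np

A = np.array([[5.0, -0.75],
              [-0.75, 0.3125]])

# Stage 1: eigendecomposition. eigh returns an orthogonal V
# (V^{-1} = V^T), so similarity and congruence coincide here.
lam, V = np.linalg.eigh(A)
print(np.allclose(np.linalg.inv(V) @ A @ V, np.diag(lam)))  # True
print(np.allclose(V.T @ A @ V, np.diag(lam)))               # True

# Stage 2: rescale each eigenvector (column) by 1/sqrt(|lambda|) so it
# is normalized with respect to A. U is no longer orthogonal.
U = V / np.sqrt(np.abs(lam))
print(np.allclose(np.linalg.inv(U), U.T))  # False: U^{-1} != U^T

# The similarity transform still gives the eigen-diagonalization...
print(np.allclose(np.linalg.inv(U) @ A @ U, np.diag(lam)))  # True
# ...but the congruence (how the metric actually transforms) now
# lands on the signature matrix diag(+1, +1).
print(np.allclose(U.T @ A @ U, np.diag(np.sign(lam))))      # True
```

(The rescaling by $1/\sqrt{|\lambda|}$ assumes no zero eigenvalues, which holds for this $A$; a degenerate metric would need its null directions left alone.)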
So in summary: the "normalized" eigenbasis gives you a basis that is orthogonal with respect to $A$ but not normalized with respect to $A$, and the diagonalization $U^{-1}AU$ equals the transformation we actually want, $U^TAU$. Once you normalize with respect to $A$, you still have an eigenbasis, but it is no longer "normalized"; $U^{-1}AU$ is still the diagonalization, but it is no longer equal to $U^TAU$, which is what we actually want.