I am looking at *Computer Vision: Models, Learning, and Inference* by Prince, chapter 5, on the multivariate Gaussian distribution:

I am not exactly sure how he gets that result in the second line in the exponent.
I am not sure how he gets the result in the second line of the exponent. We know that if a matrix is symmetric, and $\Sigma_{\text{full}}$ is, then we can rewrite it as $\Sigma_{\text{full}}=Q\Sigma_{\text{diag}}^{'}Q^T$, or in their notation $R\Sigma_{\text{diag}}^{'}R^T$. We also know that for orthonormal matrices (and $R$ is such) $R^T = R^{-1}$. So the inverse should be $\Sigma_{\text{full}}^{-1} = (R\Sigma_{\text{diag}}^{'}R^T)^{-1}=R\Sigma_{\text{diag}}^{'-1}R^T$, i.e. with $R$ as the leading factor, and not what they have. Am I incorrect?
EDIT: My comments went bye-bye with a deleted answer, so I will write here. In the book they say there are three types of covariance matrices: 1) Spherical, where the main-diagonal variances are non-negative and all equal, so you can write such a covariance matrix as a positive multiple of the identity matrix; 2) Diagonal, where the main-diagonal variances can differ; 3) Full, where the off-diagonal entries can also be non-zero. The off-diagonal entries are what tilt the ellipsoids in $N$ dimensions. However, you can rotate your frame of reference and thereby turn the full matrix into a diagonal one!
And my claim was that since $\Sigma_{\text{full}}$ is symmetric, I can rewrite it as $Q \Sigma_{\text{diag}}^{'}Q^T$, where $Q$ is orthonormal, so that $Q^{-1}=Q^T$ and $\det Q = \pm 1$: geometrically speaking, any area in the original frame of reference WILL NOT scale under the change of perspective, so the determinant has absolute value 1. But then we have the problem described in this question: my reformulation disagrees with their statement in where the transpose sits. So I concluded that it must be that $Q^T=R$.
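The claim above can be checked numerically. The sketch below (the matrix `sigma_full` is a made-up example, not from the book) uses `numpy.linalg.eigh` to diagonalize a symmetric covariance matrix and confirms both the reconstruction $\Sigma_{\text{full}}=Q\Sigma_{\text{diag}}^{'}Q^T$ and that $Q$ is orthonormal with $|\det Q| = 1$:

```python
import numpy as np

# A made-up 2x2 full covariance matrix (symmetric, positive definite).
sigma_full = np.array([[2.0, 0.8],
                       [0.8, 1.0]])

# eigh handles symmetric matrices and returns orthonormal eigenvectors
# as the columns of Q, with the eigenvalues (the diagonal variances).
eigvals, Q = np.linalg.eigh(sigma_full)
sigma_diag = np.diag(eigvals)

# Reconstruction: sigma_full = Q @ sigma_diag @ Q^T
assert np.allclose(sigma_full, Q @ sigma_diag @ Q.T)

# Q is orthonormal: Q^T Q = I, and |det Q| = 1 (areas are preserved).
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.isclose(abs(np.linalg.det(Q)), 1.0)
```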

The author is using very nonstandard notation that perhaps makes this more confusing than it should be. Normally, $R$ is used for the Cholesky factor of a covariance matrix $\Sigma$. Here, $R$ is an orthogonal matrix of eigenvectors of $\Sigma$ such that
$\Sigma=R\Sigma_{diag}R^{T}$
and
$\Sigma_{diag}=R^{T}\Sigma R$.
It isn't clear what the author means by $\Sigma_{diag}'$. This might be the transpose of the matrix $\Sigma_{diag}$, or it might simply distinguish this covariance matrix from another covariance matrix. Using ' for the transpose wouldn't make much sense because the superscript T is used elsewhere. I'll assume that the prime in $\Sigma^{'}_{diag}$ doesn't denote a transpose.
Since $R$ is orthogonal, $R^{-1}=R^{T}$.
In the original expression, you have
$(Rx)^{T}\Sigma_{diag}^{'-1}(Rx)=x^{T}(R^{T}\Sigma_{diag}^{'-1}R)x$.
Since the inverse of a product is the product of the inverses in reverse order,
$(R^{T}\Sigma_{diag}^{'}R)^{-1}=R^{T}\Sigma_{diag}^{'-1}R$.
Thus
$(Rx)^{T}\Sigma_{diag}^{'-1}(Rx)=x^{T}(R^{T}\Sigma_{diag}^{'}R)^{-1}x$.
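As a sanity check, the final identity can be verified numerically. In this sketch, $R$ is a random orthogonal matrix (obtained via a QR factorization), and $\Sigma_{\text{diag}}^{'}$ and $x$ are arbitrary stand-ins rather than anything from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthogonal R (Q factor of a QR decomposition) and a random
# diagonal covariance matrix sigma_diag_prime with positive variances.
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
sigma_diag_prime = np.diag(rng.uniform(0.5, 2.0, size=3))

x = rng.normal(size=3)

# Left-hand side: (Rx)^T (sigma')^{-1} (Rx)
lhs = (R @ x) @ np.linalg.inv(sigma_diag_prime) @ (R @ x)

# Right-hand side: x^T (R^T sigma' R)^{-1} x
rhs = x @ np.linalg.inv(R.T @ sigma_diag_prime @ R) @ x

assert np.isclose(lhs, rhs)
```

Swapping $R$ and $R^T$ in only one side breaks the equality, which is exactly the point of the question: the placement of the transpose depends on which of $R$ and $R^T$ maps the rotated frame to the original one.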