Recently, the topic of smooth diagonalisation of Riemannian metrics caught my interest. Namely, I've read through this paper from Duke Math. J. Volume 51, Number 2 (1984), 243-260 (unfortunately the document doesn't seem to be publicly available). In Theorem 4.2, the authors state and prove the following:
Let $(M^3, g)$ be a three-dimensional $C^\infty$ Riemannian manifold. Then there is an atlas of $C^\infty$ coordinate charts for $M$ such that, in each chart, the metric is diagonal, i.e.
$$d s^2 = \lambda_1(x, y, z) d x^2 + \lambda_2 (x, y, z) d y^2 + \lambda_3 (x, y, z) d z^2$$
They furthermore make a similar statement for the case $n = 2$, namely that for two-manifolds one always has local coordinates in which the metric takes the form
$$d s^2 = \lambda(x, y) (d x^2 + d y^2)$$
As someone for whom Riemannian manifolds are a relatively fresh topic, I'm still confused about how one can understand and interpret this diagonalisation property, and how one can actually diagonalize in practice.
First and foremost, I've been trying to wrap my head around the following: let's take a fixed point $p \in M$ and consider the metric $g_p$ at this point, and let $T(x)$ be its representation as a matrix, where $x \in T_pM$.
From my understanding, the diagonalisation of $g_p$ would then be equivalent to finding smooth eigenprojections for the representation matrix $T(x)$.
However, even for a smooth matrix function $T(x)$, the eigenspaces and eigenprojections do not need to behave smoothly or even exist everywhere. I found the following example in this book by Kato, §5.3: Let $n = 2$, and:
$$T(x) = e^{- \frac{1}{x^2}} \begin{pmatrix} \cos \frac{2}{x} & \sin \frac{2}{x} \\ \sin \frac{2}{x} & - \cos \frac{2}{x} \end{pmatrix}, \qquad T(0) = 0$$
Then $T(x)$ is continuous and infinitely differentiable for all real $x$, and the eigenvalues of $T(x)$, which turn out to be $\pm e^{- \frac{1}{x^2}}$ (for $x \neq 0$) and $0$ (for $x = 0$), are continuous and infinitely differentiable as well.
However, the eigenprojections for $x \neq 0$ in this case are:
$$\begin{pmatrix} \cos^2 \frac{1}{x} & \cos \frac{1}{x} \sin \frac{1}{x} \\ \cos \frac{1}{x} \sin \frac{1}{x} & \sin^2 \frac{1}{x} \end{pmatrix}, \qquad \begin{pmatrix} \sin^2 \frac{1}{x} & - \cos \frac{1}{x} \sin \frac{1}{x} \\ - \cos \frac{1}{x} \sin \frac{1}{x} & \cos^2 \frac{1}{x} \end{pmatrix}$$
These matrix functions are continuous and infinitely differentiable on any interval that doesn't contain $0$, but they cannot be extended to $x = 0$ as continuous functions. Moreover, one can show that there is no eigenvector of $T(x)$ that is continuous in a neighborhood of $x = 0$ and non-vanishing at $x = 0$.
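To convince myself that the formulas above fit together, I checked them numerically; here is a small sketch (the helper names `T` and `P_plus` are my own, and `P_plus` is the claimed eigenprojection onto the $+e^{-1/x^2}$ eigenspace):

```python
import numpy as np

def T(x):
    """Kato's example: smooth everywhere, but the eigenprojections
    cannot be continued continuously to x = 0."""
    if x == 0:
        return np.zeros((2, 2))
    return np.exp(-1 / x**2) * np.array([
        [np.cos(2 / x),  np.sin(2 / x)],
        [np.sin(2 / x), -np.cos(2 / x)],
    ])

def P_plus(x):
    """Claimed eigenprojection onto the +e^{-1/x^2} eigenspace (x != 0)."""
    c, s = np.cos(1 / x), np.sin(1 / x)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

x = 0.3
lam = np.exp(-1 / x**2)

# P_plus is a rank-one orthogonal projection ...
assert np.allclose(P_plus(x) @ P_plus(x), P_plus(x))
assert np.isclose(np.trace(P_plus(x)), 1.0)
# ... onto the eigenspace of T(x) with eigenvalue +e^{-1/x^2}
assert np.allclose(T(x) @ P_plus(x), lam * P_plus(x))
```

The oscillation is also visible numerically: evaluating `P_plus` at points $x_n = \frac{1}{\pi n}$ alternates between two fixed projections as $x_n \to 0$, so no continuous limit can exist.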
Sorry for this rather tedious example, but now my question is: why is this not a contradiction to the diagonalisation result? If, according to DeTurck and Yang, all Riemannian metrics are locally diagonalizable for $n = 2, 3$, then why are there matrix functions like these that are not smoothly diagonalizable? Do these metric diagonalisations not correspond to finding the respective eigenprojection functions of the local matrix representation of the metric, and if so, why not? Or is there something else that I'm missing?
Any help would be much appreciated – maybe I'm just missing something very simple. I've been trying to wrap my head around this now for some days but without success.
As Moishe pointed out, you need to be careful to make the distinction between orthogonal/isothermal coordinates and orthogonal frames - the former is much stronger, while the latter always exist locally (by Gram-Schmidt). However, I think the crux of your question (the non-equivalence of a diagonalizing frame and frames aligned to the eigenspaces) still stands.
I believe the key issue is the distinction between the usual matrix diagonalization by the similarity relation $g = PDP^{-1}$, and the diagonalization by the congruence relation $g = PDP^T$, which is what's going on here: we're diagonalizing a symmetric bilinear form, without any requirement that the diagonalizing basis is orthonormal. (Indeed, since we're attempting to diagonalize the metric, we don't have any natural "background" inner product.)
For similarity diagonalization, the diagonalizing basis $P$ is generically unique, and thus the regularity questions that Kato's book investigates are very natural. Congruence diagonalization of symmetric bilinear forms by orthogonal transformations is equivalent to similarity diagonalization (since $P^T = P^{-1}$ for orthogonal $P$), but if you're not making this restriction on $P$ then the situation is highly non-unique: if $g = PDP^T$ is positive definite, we can write this in the canonical form $g = (P \sqrt D) I (P \sqrt D)^T$, and in fact we have $g = (P \sqrt D O) I (P \sqrt D O)^T$ for any orthogonal $O.$ Thus the fact that $g$ is congruence-diagonalized in a given frame does not imply that this frame is aligned with the eigenprojections of $g$.
For a concrete example, consider the symmetric positive-definite matrix $$g=\left(\begin{array}{cc} 2 & 1\\ 1 & 2 \end{array}\right),$$ which has orthogonal diagonalization $$ g=\left(\begin{array}{cc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{array}\right)\left(\begin{array}{cc} 3 & 0\\ 0 & 1 \end{array}\right)\left(\begin{array}{cc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{array}\right)^{-1}$$
but also many other congruence-diagonalizations, including
$$g=\left(\begin{array}{cc} 1 & 0\\ \frac{1}{2} & 1 \end{array}\right)\left(\begin{array}{cc} 2 & 0\\ 0 & \frac{3}{2} \end{array}\right)\left(\begin{array}{cc} 1 & 0\\ \frac{1}{2} & 1 \end{array}\right)^{T}.$$
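Both factorizations above, and the non-uniqueness via an arbitrary orthogonal $O$, are easy to verify numerically. A minimal sketch with numpy (the matrix names are just those from the example):

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Orthogonal diagonalization: here congruence and similarity coincide,
# since Q^T = Q^{-1}.
Q = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)
D = np.diag([3.0, 1.0])
assert np.allclose(Q @ D @ Q.T, g)

# A non-orthogonal congruence diagonalization with *different* diagonal
# entries -- these are not the eigenvalues of g.
P = np.array([[1.0, 0.0],
              [0.5, 1.0]])
D2 = np.diag([2.0, 1.5])
assert np.allclose(P @ D2 @ P.T, g)

# Non-uniqueness: for any orthogonal O, S = P sqrt(D2) O is another
# "square root" of g, i.e. g = S S^T.
theta = 0.7
O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = P @ np.sqrt(D2) @ O
assert np.allclose(S @ S.T, g)
```

Note that the second diagonalization produces the diagonal entries $2$ and $\frac{3}{2}$, not the eigenvalues $3$ and $1$: congruence diagonalization carries strictly less spectral information than similarity diagonalization.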
Thus it shouldn't be too surprising that we can smoothly diagonalize a parametrized bilinear form in this sense, since we have a lot more room to move. Indeed, local orthonormal frames always exist: just apply the usual Gram-Schmidt construction of an orthonormal basis to a smooth local frame and check that every step is smooth.
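To illustrate the pointwise step of that Gram-Schmidt construction, here is a sketch (the function name `gram_schmidt_frame` is mine) that orthonormalizes a frame with respect to the inner product $\langle u, v \rangle_g = u^T g v$. Since every operation is a smooth function of the entries of $g$ and the input frame, applying it pointwise to a smooth metric and a smooth frame yields a smooth $g$-orthonormal frame:

```python
import numpy as np

def gram_schmidt_frame(g, frame):
    """Orthonormalize the columns of `frame` with respect to the inner
    product <u, v>_g = u^T g v defined by a symmetric positive-definite g."""
    cols = []
    for v in frame.T:                       # iterate over the frame's columns
        w = np.array(v, dtype=float)
        for e in cols:
            w = w - (e @ g @ w) * e         # subtract the g-projection onto
                                            # the previously built vectors
        cols.append(w / np.sqrt(w @ g @ w))  # g-normalize
    return np.column_stack(cols)

g = np.array([[2.0, 1.0],
              [1.0, 2.0]])
E = gram_schmidt_frame(g, np.eye(2))

# In the frame E, the bilinear form g becomes the identity:
assert np.allclose(E.T @ g @ E, np.eye(2))
```

In the resulting frame the metric is not merely diagonal but the identity, which is exactly the congruence canonical form $g = (P\sqrt{D})I(P\sqrt{D})^T$ from above; nothing about $E$ needs to be aligned with the eigenvectors of $g$.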