Understanding why a symmetric matrix can always be diagonalised


I'm reading through some general relativity notes. I have reached a part that I don't understand, probably because my linear algebra is not good enough.

I don't really understand what the maths is showing me:

I can accept the first line:

$g_{\mu' \nu'} = \Lambda^{\mu}_{\: \mu'} \: g_{\mu \nu} \: \Lambda^{\nu}_{\: \nu'}$

But then in the next line:

$g_{\mu' \nu'} = (\Lambda^{T})_{\mu'}^{\: \mu} \: g_{\mu \nu} \: \Lambda^{\nu}_{\: \nu'}$

I'm already not sure why we are changing to a transposed matrix, but I continue to see where this is going...

Then this becomes:

$g' = \Lambda^{T} \: g \: \Lambda$

Now I'm lost. Where did the indices go? Is it because of contractions? If so, I can see that on the RHS, but not on the LHS. And aren't there still lower indices on each of the $\Lambda$s?
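
For concreteness, here is a small numerical check (with made-up numbers) of what I think the claim is: the index expression $\Lambda^{\mu}_{\: \mu'} \, g_{\mu \nu} \, \Lambda^{\nu}_{\: \nu'}$ and the matrix product $\Lambda^T g \Lambda$ give the same array.

```python
import numpy as np

# A toy "metric" g (symmetric 2x2) and an arbitrary change-of-basis
# matrix L, standing in for g_{mu nu} and Lambda^mu_{mu'}.
# Both are made-up numbers, chosen only for illustration.
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])
L = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# Index expression: g'_{ab} = L[m, a] * g[m, n] * L[n, b], summed over m, n
g_prime_indices = np.einsum('ma,mn,nb->ab', L, g, L)

# Matrix expression: g' = L^T g L
g_prime_matrix = L.T @ g @ L

print(np.allclose(g_prime_indices, g_prime_matrix))  # True
```

So the matrix form is just shorthand for the index contractions: summing over the first (upper) index of each $\Lambda$ is exactly what multiplying by $\Lambda^T$ on the left does.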

So my main questions are:

  1. Can anyone please explain this last step?
  2. Why does any of this tell us that "g is a symmetric matrix and can always be diagonalized" as the text says?

I'm not a mathematician, so a high-level explanation (as far as possible) would be much appreciated!

1 Answer

At a point $p$ on your manifold, the coordinate matrix of the metric tensor, $g_{ij} = g_p(e_i, e_j)$, is symmetric because the metric is by definition symmetric ($g_p(u, v) = g_p(v, u)$). What the text doesn't explain is why a symmetric $n \times n$ matrix $A$ with real entries always gives $\mathbb{R}^n$ an orthonormal basis consisting of eigenvectors of $A$. This is the content of the spectral theorem from linear algebra. The proof is quite short; see Proposition 3.3.2 on page 101 of https://mtaylor.web.unc.edu/notes/linear-algebra-notes/
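
To see the spectral theorem in action, here is a sketch using NumPy's `eigh` routine for symmetric matrices (the matrix $A$ below is made up): it returns real eigenvalues and a matrix $Q$ whose columns are orthonormal eigenvectors, so $Q^T A Q$ is diagonal.

```python
import numpy as np

# A toy symmetric matrix (made-up entries).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(A, A.T)  # A is symmetric

# eigh is NumPy's eigensolver for symmetric/Hermitian matrices.
eigvals, Q = np.linalg.eigh(A)
D = Q.T @ A @ Q

print(np.allclose(D, np.diag(eigvals)))  # True: Q^T A Q is diagonal
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: columns of Q are orthonormal
```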

An overview of the proof:

Let $A$ be a symmetric $n \times n$ matrix with real entries. This means $A^T = A$.

Step 1: Use the complexification of $A$ and the fundamental theorem of algebra to show that $A$ has a real eigenvalue, and hence an eigenvector $u \in \mathbb{R}^n$. Normalize $u$ so that $|u| = 1$.

Step 2: Show that if $U$ is a subspace of $\mathbb{R}^n$ with $A(U) \subseteq U$, then $A(U^{\perp}) \subseteq U^{\perp}$. (This is where symmetry is used: if $w \in U^{\perp}$ and $v \in U$, then $\langle Aw, v \rangle = \langle w, Av \rangle = 0$ since $Av \in U$.) Since $u$ is an eigenvector of $A$, we have $A(\text{span}(u)) \subseteq \text{span}(u)$, and hence $A(\text{span}(u)^{\perp}) \subseteq \text{span}(u)^{\perp}$.
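
A quick numerical illustration of Step 2 (with a randomly generated symmetric matrix, so the specific numbers are not meaningful): take an eigenvector $u$ of $A$ and any $w$ orthogonal to $u$; then $Aw$ is still orthogonal to $u$.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M + M.T  # symmetric by construction

# Take a unit eigenvector u of A (eigh returns normalized eigenvectors).
eigvals, vecs = np.linalg.eigh(A)
u = vecs[:, 0]

# Build a vector w orthogonal to u by projecting a random vector off u.
v = rng.standard_normal(3)
w = v - (v @ u) * u

print(np.isclose(w @ u, 0.0))        # True: w is orthogonal to u
print(np.isclose((A @ w) @ u, 0.0))  # True: A w stays orthogonal to u
```

The second check works precisely because $A$ is symmetric: $(Aw) \cdot u = w \cdot (Au) = \lambda \, (w \cdot u) = 0$.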

By induction, the $n - 1$ dimensional space $\text{span}(u)^{\perp}$ has an orthonormal basis $u_2, \dots, u_n$ consisting of eigenvectors of $A$. Hence $u, u_2, \dots, u_n$ is an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$.
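
Tying this back to your question 2: apply the theorem to the symmetric matrix $g$ itself. If $Q$ is the matrix whose columns are the orthonormal eigenvectors of $g$, then choosing $\Lambda = Q$ in $g' = \Lambda^T g \Lambda$ makes $g'$ diagonal. A sketch with a made-up $2 \times 2$ "metric":

```python
import numpy as np

# A toy symmetric "metric" with one negative and one positive eigenvalue
# (made-up numbers, loosely mimicking a Lorentzian signature).
g = np.array([[-1.0, 0.2],
              [ 0.2, 1.0]])

eigvals, Q = np.linalg.eigh(g)  # Q: orthonormal eigenvectors of g
g_prime = Q.T @ g @ Q           # the transformation law with Lambda = Q

print(np.allclose(g_prime, np.diag(eigvals)))  # True: g' is diagonal
```

So the transformation law $g' = \Lambda^T g \Lambda$ is exactly the right shape for the spectral theorem to apply: symmetry of $g$ guarantees such a $Q$ exists.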