What is the relation of the metric matrix with the signature of a Geometric Algebra?


As far as I know, the metric matrix is used to measure the length of vectors regardless of the basis chosen to represent them. I read in the book "Álgebra Geométrica e Aplicações" ("Geometric Algebra and Applications") by Fernandes, Lavor & Neto that the diagonalized metric matrix is used to identify the signature of a Geometric Algebra by counting the positive values $p$, negative values $q$, and zeros $r$ along its diagonal.

For example, if I have the matrix $$A = \begin{pmatrix}5 & -3/4 \\ -3/4 & 5/16\end{pmatrix}$$ built from non-orthogonal basis vectors, how could it be translated into a signature, if possible? I ask because the diagonalized version of $A$ does not have the values commonly found on its diagonal in definitions of the geometric product of a basis vector with itself: $e_i^2 \in \{1, -1, 0\}$. Does this only apply to orthogonal vectors?
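For reference, here is a quick numerical check of the eigenvalues of my example matrix (a sketch using numpy):

```python
import numpy as np

# The metric matrix from the example above (non-orthogonal basis).
A = np.array([[5.0, -0.75],
              [-0.75, 5.0 / 16.0]])

# eigvalsh is for symmetric matrices and returns real eigenvalues in
# ascending order.
eigenvalues = np.linalg.eigvalsh(A)
print(eigenvalues)  # both positive, so counting signs gives p=2, q=0, r=0
```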

Also, why is it necessary for the chosen matrix representing the signature of a Geometric Algebra to be diagonalized?


There are 2 best solutions below


It is not necessary to have a diagonal matrix to define a Clifford algebra, it is just convenient.

The (eigenvalue) diagonalization won't be composed of $\pm 1$'s, but the signs will still correspond to the signature.

The issue here is that a linear transformation transforms differently from a bilinear form. If $A$ represents a transformation $v \mapsto Av$ in a basis $B$, and $U$ is a change-of-basis matrix taking $v'$ in basis $B'$ to $Uv'$ in basis $B$, then $A$ transforms to basis $B'$ as $$ A \mapsto U^{-1}AU. $$ However, if $A$ is a bilinear form $(v,w) \mapsto v^TAw$, it must transform as $$ A \mapsto U^TAU. $$ Both of these can be derived easily by considering how vectors get mapped.

Now, because $A$ is real-symmetric, if $U$ changes from the "normalized" eigenbasis it will be an "orthogonal matrix". This has nothing to do with orthogonality with respect to $A$! It means "the columns of $U$ are orthonormal with respect to the standard inner product $(v,w) \mapsto v^Tw$." Such a matrix has $U^{-1} = U^T$, so the two transformation rules above happen to coincide. But once you rescale the eigenbasis so that it is normalized with respect to $A$, the matrix $U$ is no longer "orthogonal" in general, meaning $U^{-1} \ne U^T$. So $U^{-1}AU$ will still be a valid (eigen) diagonalization of $A$, but the $U^TAU$ we are actually interested in will no longer be; it will instead be in the "all $0, +1, -1$" diagonal form.

So in summary: the "normalized" eigenbasis gives you a basis that is orthogonal with respect to $A$ but not normalized with respect to $A$, and there the diagonalization $U^{-1}AU$ equals the transformation we actually want, $U^TAU$. Once you normalize with respect to $A$, you still have an eigenbasis but it is no longer "normalized" in the standard sense; $U^{-1}AU$ is still the eigen-diagonalization, but it is no longer equal to what we actually want, $U^TAU$.
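This distinction between the two transformation rules can be checked numerically. A small sketch using numpy and the matrix from the question (the rescaling factor $1/\sqrt{|\lambda_i|}$ is what normalizes each eigenvector with respect to $A$):

```python
import numpy as np

# The symmetric bilinear form from the question.
A = np.array([[5.0, -0.75],
              [-0.75, 5.0 / 16.0]])

# Eigenbasis, orthonormal w.r.t. the *standard* inner product.
eigvals, U = np.linalg.eigh(A)
# For this U, U^{-1} = U^T, so the two transformation rules coincide.
assert np.allclose(np.linalg.inv(U) @ A @ U, U.T @ A @ U)

# Rescale each eigenvector so it is normalized w.r.t. A: v^T A v = +/-1.
V = U / np.sqrt(np.abs(eigvals))
# V is no longer an orthogonal matrix, so the two rules now disagree:
print(np.linalg.inv(V) @ A @ V)  # still diag(eigenvalues)
print(V.T @ A @ V)               # diagonal of +/-1's (the signature form)
```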


Geometric algebras can also be constructed with dot products that have non-diagonal quadratic form representations

$\mathbf{x} \cdot \mathbf{y} = \mathbf{x}^\text{T} A \mathbf{y},$

such as your example

$A =\begin{bmatrix}5 & -3/4 \\ -3/4 & 5/16\end{bmatrix}.$

With fractions like that, this quadratic form must be associated with some North American's woodworking shop. As the matrix is symmetric, it must have an orthogonal diagonalization. The eigenvalues are

$\begin{aligned}\lambda_1 &= \frac{1}{{32}} \left( {85 + 3 \sqrt{689}} \right) \\ \lambda_2 &= \frac{1}{{32}} \left( {85 - 3 \sqrt{689}} \right) \\ \end{aligned}$

with associated eigenvectors

$\begin{aligned}\mathbf{p}_1 &=\begin{bmatrix}-\frac{1}{{8}} \left( { 25 + \sqrt{689}} \right) \\ 1\end{bmatrix} \\ \mathbf{p}_2 &=\begin{bmatrix}-\frac{1}{{8}} \left( { 25 - \sqrt{689}} \right) \\ 1\end{bmatrix}\end{aligned}.$

These eigenvectors can be orthonormalized

$\begin{aligned}\mathbf{p}_1 &=\begin{bmatrix}-0.988034 \\ 0.154233\end{bmatrix}\\ \mathbf{p}_2 &=\begin{bmatrix}0.154233 \\ 0.988034\end{bmatrix}\end{aligned},$

but it now looks like we have moved to measurements made in a European (metric based) environment.

In general, we may diagonalize a $2 \times 2$ symmetric matrix $A$ if we find the orthonormal eigenvectors $\left\{ {\mathbf{p}_1, \mathbf{p}_2} \right\}$, such that

$A \mathbf{p}_i = \lambda_i \mathbf{p}_i,$

or with

$\begin{aligned}P &=\begin{bmatrix}\mathbf{p}_1 & \mathbf{p}_2\end{bmatrix} \\ \Sigma &=\begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_2\end{bmatrix}\end{aligned},$

for which the diagonal decomposition of $ A $ is

$A = P \Sigma P^\text{T}.$

Such a diagonalization simplifies the computation of the quadratic form

$\mathbf{x} \cdot \mathbf{y} = \mathbf{x}^\text{T} P \Sigma P^\text{T} \mathbf{y} = \left( { P^\text{T} \mathbf{x} } \right)^\text{T} \Sigma \left( { P^\text{T} \mathbf{y} } \right).$
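The equality of the direct and decomposed forms of the dot product can be sketched numerically (the vectors `x` and `y` are arbitrary test values, not from the original):

```python
import numpy as np

# Evaluate the dot product directly and via A = P Sigma P^T.
A = np.array([[5.0, -0.75],
              [-0.75, 5.0 / 16.0]])
eigvals, P = np.linalg.eigh(A)
Sigma = np.diag(eigvals)

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

direct = x @ A @ y
via_eigen = (P.T @ x) @ Sigma @ (P.T @ y)
assert np.isclose(direct, via_eigen)
```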

Recall that for an orthogonal matrix $P$, the transformed coordinates $P^\text{T} \mathbf{x}$ of a coordinate vector $\mathbf{x}$ are the coordinates of the same vector in the orthonormal eigenvector basis. We can see that by writing

$\begin{aligned}\mathbf{x}&= P P^\text{T} \mathbf{x} \\ &=\begin{bmatrix}\mathbf{p}_1 & \mathbf{p}_2\end{bmatrix}\begin{bmatrix}\mathbf{p}_1^\text{T} \\ \mathbf{p}_2^\text{T}\end{bmatrix}\mathbf{x} \\ &=\begin{bmatrix}\mathbf{p}_1 & \mathbf{p}_2\end{bmatrix}\begin{bmatrix}\mathbf{p}_1^\text{T} \mathbf{x} \\ \mathbf{p}_2^\text{T} \mathbf{x}\end{bmatrix} \\ &=\mathbf{p}_1 \left( { \mathbf{p}_1^\text{T} \mathbf{x} } \right) +\mathbf{p}_2 \left( { \mathbf{p}_2^\text{T} \mathbf{x} } \right).\end{aligned}$

We see that $ \mathbf{p}_i^\text{T} \mathbf{x} $ are the generalized coordinates of the vector $ \mathbf{x} $ in the eigenvector frame. In geometric algebra, we'd typically write that coordinate representation in mixed index notation

$\mathbf{x} = \sum_i \mathbf{p}_i x^i,$

where

$x^i = \mathbf{p}_i^\text{T} \mathbf{x},$

or

$P^\text{T} \mathbf{x} =\begin{bmatrix}x^1 \\ x^2\end{bmatrix}.$

This orthonormal eigenvector basis simplifies the quadratic form expansion of the dot product nicely

$\begin{aligned}\mathbf{x} \cdot \mathbf{y}&=\begin{bmatrix}x^1 & x^2\end{bmatrix}\begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_2\end{bmatrix}\begin{bmatrix}y^1 \\ y^2\end{bmatrix} \\ &=\begin{bmatrix}x^1 & x^2\end{bmatrix}\begin{bmatrix}\lambda_1 y^1 \\ \lambda_2 y^2\end{bmatrix} \\ &=\sum_i \lambda_i x^i y^i.\end{aligned}$

Like a conventional dot product, we have a sum of paired products of coordinates, but unlike the conventional Euclidean dot product, we also have weighting factors (the eigenvalues.)

In geometric algebra, the eigenvalues of the quadratic form that is used to define the dot product usually have values $\pm 1, 0$, and those are used to describe the signature. For example, the geometric algebra for a Euclidean space has only $1$'s, and STA has values $(1,-1,-1,-1)$ (or the opposite sign convention) along the diagonal. PGAs introduce a zero on the diagonal, and CGAs add further $\pm 1$ directions.
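Counting the eigenvalue signs to read off a signature $(p, q, r)$ can be sketched in a few lines of numpy (the function name and the zero-tolerance `tol` are my own choices):

```python
import numpy as np

def signature(A, tol=1e-12):
    """Count (p, q, r): the positive, negative, and zero eigenvalues of a
    symmetric quadratic-form matrix A. tol is an assumed numerical
    threshold for treating an eigenvalue as zero."""
    eigenvalues = np.linalg.eigvalsh(A)
    p = int(np.sum(eigenvalues > tol))
    q = int(np.sum(eigenvalues < -tol))
    r = len(eigenvalues) - p - q
    return p, q, r

# The non-diagonal Euclidean metric from the question: signature (2, 0, 0).
A = np.array([[5.0, -0.75], [-0.75, 5.0 / 16.0]])
print(signature(A))  # (2, 0, 0)
# The STA metric diag(1, -1, -1, -1): signature (1, 3, 0).
print(signature(np.diag([1.0, -1.0, -1.0, -1.0])))  # (1, 3, 0)
```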

One could define a GA that had other eigenvalues. Many of the identities are going to be independent of the choice of quadratic form that is used to represent the dot product. For instance, we still start with the contraction axiom to define the product of a vector with itself

$\mathbf{x}^2 = \mathbf{x} \cdot \mathbf{x}.$

An immediate implication of this is

$\left( { \mathbf{a} + \mathbf{b} } \right)^2 = \left( { \mathbf{a} + \mathbf{b} } \right) \cdot \left( { \mathbf{a} + \mathbf{b} } \right).$

Expanding both sides, we have in turn

$\left( { \mathbf{a} + \mathbf{b} } \right)^2 = \mathbf{a}^2 + \mathbf{b}^2 + \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} + \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a},$

and

$\left( { \mathbf{a} + \mathbf{b} } \right) \cdot \left( { \mathbf{a} + \mathbf{b} } \right) = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{b} + \mathbf{b} \cdot \mathbf{a}.$

Equating the two, we find the usual identity

$\mathbf{a} \cdot \mathbf{b} = \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} } \right).$
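This identity can be checked numerically with a matrix representation of the vectors of $\mathrm{Cl}(2,0)$ (one illustrative construction among many): taking $\mathbf{e}_1, \mathbf{e}_2$ as real symmetric matrices satisfying $\mathbf{e}_i \mathbf{e}_j + \mathbf{e}_j \mathbf{e}_i = 2\delta_{ij} I$, the geometric product of vectors becomes the matrix product, and $\frac{1}{2}(\mathbf{a}\mathbf{b} + \mathbf{b}\mathbf{a})$ comes out as $(\mathbf{a} \cdot \mathbf{b})\, I$. The coordinate values below are arbitrary test data:

```python
import numpy as np

# A 2x2 real matrix representation of the vectors of Cl(2,0):
# e1^2 = e2^2 = I and e1 e2 + e2 e1 = 0.
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])

a_coords = np.array([2.0, 3.0])
b_coords = np.array([-1.0, 4.0])
a = a_coords[0] * e1 + a_coords[1] * e2
b = b_coords[0] * e1 + b_coords[1] * e2

# (ab + ba)/2 should equal (a . b) times the identity.
sym = (a @ b + b @ a) / 2
dot = a_coords @ b_coords  # Euclidean dot product, since the metric is diag(1, 1)
assert np.allclose(sym, dot * np.eye(2))
```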

This expression for the dot product is not only independent of coordinates, but also holds when the dot product for the space has an arbitrary symmetric quadratic form. When the basis for the space coincides with a set of orthonormal eigenvectors for that quadratic form, the computation of that dot product, given coordinates with respect to that basis, will be particularly simple.

One need not necessarily use an orthonormal basis, nor does one have to assert that the diagonalized metric have only $\pm 1, 0$ eigenvalues.