How do we square unit vectors to 1, -1, or 0? What does ℜ(p, q, r) mean?


My Geometric Algebra book starts with the following:

[Image: the book's opening passage, not reproduced here]

I did not understand it at all, and went on with the book hoping it would be explained further. It turns out that never happens! I'm reading the book on my own, and I can't find anything similar to what the author says.

  1. What exactly does he mean by squaring a vector? How$^1$ can the square be $0$ or negative?

  2. This may sound way off, but does it have to do with metrics like those used in relativity? (That's something I'm a bit more comfortable with.)

  3. Any sources to read more on it?

I do not even know the exact name of the topic.

Thanks for any input you may share!

Footnotes:

1] Later in the book, products such as $aa$ or $ab$ start to be defined, up to the point where the inner product is defined as $a \cdot b = |a||b|\cos \left( \measuredangle \left( a,b \right) \right)$, with the assumption that $a^2 \in \mathbb{R}$ and $a^2 = \pm |a|^2$, where $|a|$ is the magnitude of the vector $a$. Perhaps it has to do with that.

On BEST ANSWER

Let's start with the question of what it means to square a vector.

The workhorse axiom of geometric algebra is the contraction axiom, which specifies that the square of a vector $ \mathbf{x} $ satisfies the rule $$ \mathbf{x}^2 = \mathbf{x} \cdot \mathbf{x}.$$ With this axiom you can start to give meaning to other multivector quantities (sums of scalars, vectors, or products of vectors). For example, given any unit vector $ \hat{\mathbf{u}} $, we have $$ \hat{\mathbf{u}}^2 = \hat{\mathbf{u}} \cdot \hat{\mathbf{u}} = 1.$$ Given two orthonormal vectors $ \hat{\mathbf{u}}, \hat{\mathbf{v}} $, we can form vector products such as $$ \hat{\mathbf{u}} \left(\hat{\mathbf{u}} \hat{\mathbf{v}}\right) = \left( { \hat{\mathbf{u}} \hat{\mathbf{u}} } \right) \hat{\mathbf{v}} = \hat{\mathbf{v}},$$ $$ \hat{\mathbf{v}} \left(\hat{\mathbf{v}} \hat{\mathbf{u}}\right) = \left( { \hat{\mathbf{v}} \hat{\mathbf{v}} } \right) \hat{\mathbf{u}} = \hat{\mathbf{u}}.$$ Operationally, this gives meaning to a (bivector) quantity like $ \hat{\mathbf{u}} \hat{\mathbf{v}} $: acting through multiplication on $ \hat{\mathbf{u}} $ (from the right in this case), it rotates $ \hat{\mathbf{u}} $ towards $ \hat{\mathbf{v}} $; likewise, $ \hat{\mathbf{v}} $ multiplied by the bivector $ \hat{\mathbf{v}} \hat{\mathbf{u}} $ is rotated 90 degrees towards $ \hat{\mathbf{u}} $. The contraction axiom can readily be applied to extract other important relationships.
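These identities are easy to check numerically. A minimal sketch in Python (my own illustration, not the book's construction): the geometric algebra of the Euclidean plane can be faithfully represented by real $2 \times 2$ matrices, so the geometric product becomes ordinary matrix multiplication.

```python
# Model the geometric algebra of the Euclidean plane with real 2x2
# matrices (an illustrative representation, not from the book): the
# geometric product becomes matrix multiplication.

def mm(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I  = [[1, 0], [0, 1]]    # the scalar 1
e1 = [[1, 0], [0, -1]]   # unit vector e1
e2 = [[0, 1], [1, 0]]    # unit vector e2, orthogonal to e1

# Contraction axiom: each unit vector squares to 1.
assert mm(e1, e1) == I
assert mm(e2, e2) == I

# e1 (e1 e2) = (e1 e1) e2 = e2, and e2 (e2 e1) = e1: right-multiplication
# by the bivector rotates one unit vector onto the other.
assert mm(e1, mm(e1, e2)) == e2
assert mm(e2, mm(e2, e1)) == e1
```

The specific matrices chosen for `e1` and `e2` are one convenient choice; any pair of anticommuting matrices that square to the identity would do.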
For example, again given two orthonormal vectors $ \hat{\mathbf{u}}, \hat{\mathbf{v}} $, we have $$\begin{aligned}\left( {\hat{\mathbf{u}} + \hat{\mathbf{v}}} \right)^2 &= \left( {\hat{\mathbf{u}} + \hat{\mathbf{v}}} \right) \cdot \left( { \hat{\mathbf{u}} + \hat{\mathbf{v}} } \right) \\ &= \hat{\mathbf{u}} \cdot \hat{\mathbf{u}} + 2 \, \hat{\mathbf{u}} \cdot \hat{\mathbf{v}} + \hat{\mathbf{v}} \cdot \hat{\mathbf{v}} \\ &= 2,\end{aligned}$$ since the cross term vanishes by orthogonality. However, we also have $$\begin{aligned}\left( {\hat{\mathbf{u}} + \hat{\mathbf{v}}} \right)^2 &=\left( {\hat{\mathbf{u}} + \hat{\mathbf{v}}} \right) \left( {\hat{\mathbf{u}} + \hat{\mathbf{v}}} \right) \\ &=\hat{\mathbf{u}} \hat{\mathbf{u}} + \hat{\mathbf{u}} \hat{\mathbf{v}} + \hat{\mathbf{v}} \hat{\mathbf{u}} + \hat{\mathbf{v}} \hat{\mathbf{v}} \\ &= 2 + \hat{\mathbf{u}} \hat{\mathbf{v}} + \hat{\mathbf{v}} \hat{\mathbf{u}}.\end{aligned}$$ Comparing the two expansions, we must have $$\hat{\mathbf{u}} \hat{\mathbf{v}} = - \hat{\mathbf{v}} \hat{\mathbf{u}}.$$ Having posited this rule for squaring a vector, we find that a product of two orthonormal vectors anticommutes. We can build on this bit of knowledge and look at other vector products, such as $$\begin{aligned} \left( { \hat{\mathbf{u}} \hat{\mathbf{v}} } \right)^2 = \left( { \hat{\mathbf{u}} \hat{\mathbf{v}} } \right) \left( { \hat{\mathbf{u}} \hat{\mathbf{v}} } \right) = -\left( { \hat{\mathbf{v}} \hat{\mathbf{u}} } \right) \left( { \hat{\mathbf{u}} \hat{\mathbf{v}} } \right) =-\hat{\mathbf{v}} \left( { \hat{\mathbf{u}} \hat{\mathbf{u}} } \right)\hat{\mathbf{v}} =-\hat{\mathbf{v}} \hat{\mathbf{v}} =-1.\end{aligned}$$ We see that the product of two orthonormal vectors squares to $ -1 $, like the square of the complex imaginary. All of this follows from the contraction axiom, which specifies a rule for the square of a vector (plus the assumption that multiplication distributes over addition, but not that it commutes).
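The anticommutation rule and the $-1$ square can also be verified concretely. In the following Python sketch (again my own illustration), the plane's geometric algebra is modeled with real $2 \times 2$ matrices, so both facts reduce to matrix arithmetic:

```python
# Illustrative 2x2 real-matrix model of the plane's geometric algebra:
# check anticommutation of orthonormal vectors and that the unit
# bivector squares to -1, like the complex imaginary.

def mm(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def neg(a):
    """Negate a 2x2 matrix."""
    return [[-x for x in row] for row in a]

I  = [[1, 0], [0, 1]]    # the scalar 1
e1 = [[1, 0], [0, -1]]   # unit vector e1
e2 = [[0, 1], [1, 0]]    # unit vector e2, orthogonal to e1

uv = mm(e1, e2)   # the bivector e1 e2
vu = mm(e2, e1)   # the bivector e2 e1

assert uv == neg(vu)           # e1 e2 = -(e2 e1): anticommutation
assert mm(uv, uv) == neg(I)    # (e1 e2)^2 = -1
```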
Just as multiplying a complex number $ z $ by $ i $ rotates $ z $, one can show that such a bivector product rotates any vector lying in the plane spanned by those two vectors. Unlike with complex numbers, we get a different rotational sense depending on whether we multiply the vector on the right or on the left.

You asked what the square of a vector was, and strictly speaking, I could have stopped with the definition. However, I hope that the discussion after that helps show why we care about the square of a vector. That little axiom allows us to extract a significant amount of information about vector products, and start down the path of extracting geometric interpretation from the algebra.

As for the question of a negative or 0 vector square, you are right that one of the ways that can occur is in a relativistic context. In special relativity, we form vectors like $$x = c t \mathbf{e}_0 + \mathbf{x},$$ where $ \mathbf{e}_0 $ is a "timelike" basis vector and $ \mathbf{x} $ is a spacelike vector. There are a variety of notations for such a spacetime vector, such as $$ x = (ct, \mathbf{x}),$$ or, for example, using a Dirac basis, $$ x = x^\mu \gamma_\mu = x^0 \gamma_0 + x^1 \gamma_1 + x^2 \gamma_2 + x^3 \gamma_3.$$ For all such notations, there will always be a dot product defined on the four-vector space of the form $$x \cdot x = \pm \left( { (ct)^2 - \mathbf{x} \cdot \mathbf{x} } \right),$$ where the sign varies according to the convention used by the author. Given such a dot product (i.e. a metric associated with the basis), we can define a geometric algebra over the four-vector space that is also based on the contraction axiom, giving meaning to the square of a four-vector. For example, using the Dirac basis, we have $$\begin{aligned} x^2 &= \left( { x^\mu \gamma_\mu } \right) \cdot \left( x^\nu \gamma_\nu \right) \\ &= x^\mu x^\nu \left( { \gamma_\mu \cdot \gamma_\nu } \right) \\ &= x^\mu x^\nu g_{\mu\nu} \\ &= x^\mu x_\mu.\end{aligned}$$ The square of a vector, in the conventional tensor formalism, is just the contraction of that vector with itself.
If we choose $ \gamma_0 \cdot \gamma_0 = -\gamma_k \cdot \gamma_k = 1 $ as our metric, then a timelike vector such as $ c t \gamma_0 $ has a positive square $$ \left( { c t \gamma_0 } \right)^2 = (c t)^2 \ge 0,$$ and a spacelike vector $ x^k \gamma_k $ has a negative square $$ \left( { x^k \gamma_k } \right)^2 =-\sum_{k=1}^3 \left( {x^k} \right)^2 \le 0.$$ On the other hand, we can form "light-like" vectors that have a zero square, such as $ x = \gamma_0 + \gamma_1 $: $$\begin{aligned} x^2 &= \left( { \gamma_0 + \gamma_1 } \right)^2 \\ &= \left( { \gamma_0 + \gamma_1 } \right) \cdot \left( { \gamma_0 + \gamma_1 } \right) \\ &= \gamma_0^2 + \gamma_1^2 \\ &= +1 - 1 \\ &= 0.\end{aligned}$$
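Since the metric here is diagonal, these three cases reduce to a one-line sum, which the following Python sketch makes explicit (the coordinate values are illustrative):

```python
# Squares of four-vectors under the diagonal metric g = (+1, -1, -1, -1)
# used above; the example coordinates are arbitrary illustrations.

def sq(x, g=(1, -1, -1, -1)):
    """x . x = g_{mu nu} x^mu x^nu for a diagonal metric g."""
    return sum(gi * xi * xi for gi, xi in zip(g, x))

timelike  = (2.0, 0.0, 0.0, 0.0)   # c t gamma_0
spacelike = (0.0, 3.0, 0.0, 0.0)   # x^1 gamma_1
lightlike = (1.0, 1.0, 0.0, 0.0)   # gamma_0 + gamma_1

assert sq(timelike)  > 0    # positive square
assert sq(spacelike) < 0    # negative square
assert sq(lightlike) == 0   # zero square (light-like)
```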

In special relativity, we see that we have the possibility of vectors whose squares are positive, negative, or zero, even though our basis vectors only ever square to $ \pm 1 $. The signature of the metric is the list of values down the diagonal of $ g_{\mu\nu} $, usually one of $ +1,-1,-1,-1$ or $ -1,+1,+1,+1$. However, there are geometric algebra domains (conformal and projective geometric algebras) where it is valuable to have basis vectors that are null, alongside basis vectors with non-zero squares. This is presumably what the $ \mathbb{R}(p, q, r) $-style notation in your book refers to: a signature with $ p $ basis vectors squaring to $ +1 $, $ q $ squaring to $ -1 $, and $ r $ squaring to $ 0 $.
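Assuming the $ \mathbb{R}(p, q, r) $ notation follows the common geometric-algebra convention (an assumption on my part, since I don't have your book), the signature is just a tally of basis-vector squares, which a short sketch makes concrete:

```python
# Assumed reading of an R(p, q, r) signature: p basis vectors square
# to +1, q square to -1, and r square to 0 (null).

def basis_squares(p, q, r):
    """Diagonal metric entries for a signature (p, q, r)."""
    return [1] * p + [-1] * q + [0] * r

assert basis_squares(3, 0, 0) == [1, 1, 1]        # Euclidean 3-space
assert basis_squares(1, 3, 0) == [1, -1, -1, -1]  # spacetime, (+,-,-,-)
assert basis_squares(2, 0, 1) == [1, 1, 0]        # one null basis vector
```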

I'd recommend the book "Geometric Algebra for Physicists" (Doran and Lasenby) for a thorough treatment of geometric algebra in a relativistic context.