Why is the inner product of two simple vectors simple? (Geometric algebra)


I’m trying to understand the reason for the assertion on page 20 of Hestenes and Sobczyk’s “Clifford Algebra to Geometric Calculus” that

If $B$ is a simple $s$-vector, then $B\cdot A$ [where $A$ is a simple $n$-vector] is simple.

According to page 4, a multivector $A_r$ is called a simple $r$-vector iff it can be factored into a product of $r$ anticommuting vectors $a_1, a_2, \dots, a_r$, that is

$$ A_r = a_1a_2\dotsm a_r, $$

where $a_ja_k = -a_ka_j$ for $j, k = 1, 2, \dots, r$ and $j\neq k$.

However, I can’t see why this is true even in the simple case where $B$ is a $1$-vector $b=b_1+b_2+b_3$ and $A=a_1a_2a_3$ is a simple $3$-vector with $b_i$ parallel to $a_i$. In this case,

$$\begin{aligned}b\cdot A &= (b_1\cdot a_1)a_2a_3 - a_1(b_2\cdot a_2)a_3 + a_1a_2(b_3\cdot a_3) \\ &= (b_1\cdot a_1)a_2a_3 + a_1\left[a_2(b_3\cdot a_3) - (b_2\cdot a_2)a_3\right] \end{aligned}$$

and I can’t factor this further into a simple $2$-vector.


Best answer

$ \newcommand\form[1]{\langle#1\rangle} \newcommand\lcontr{\mathbin\rfloor} $This can be proved by induction on grade, but instead I am going to give a "vector free" approach. Assume we have an $n$-dimensional vector space $V$ equipped with a nondegenerate metric which generates a geometric algebra. (When the metric is degenerate we can still make the arguments to follow work, but we have to do some shenanigans with the dual space $V^*$.)

First, notation: your inner product $\cdot$ can be defined on $s$- and $t$-vectors $A_s, B_t$ by $$ A_s\cdot B_t = \form{A_sB_t}_{|s-t|}. $$ However, the left contraction $$ A_s\lcontr B_t = \form{A_sB_t}_{t-s} $$ is better behaved (where the grade projection is defined to be $0$ when $t-s$ is negative). For instance, for arbitrary multivectors $A, B, C$ we have $$ (A\wedge B)\lcontr C = A\lcontr(B\lcontr C),\quad (A\wedge B)*C = A*(B\lcontr C) $$ with $A*B = \form{AB}_0$ the scalar product. The second (adjoint) identity can be taken as a definition of the contraction when the metric is nondegenerate. In light of the first identity we make $\wedge$ tighter-binding than $\lcontr$ and make $\lcontr$ right-associative, so that we may write $$ A\wedge B\lcontr C = A\lcontr B\lcontr C. $$
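As a sanity check, both identities can be verified numerically. Here is my own minimal Python sketch of $G(3)$ (basis blades as bitmasks, Euclidean metric), not code from the book:

```python
# Minimal G(3): a multivector is a dict {bitmask: coeff}; bit i of the key
# means a factor of e_{i+1}, so 0b011 is e1e2.  Euclidean metric: e_i^2 = +1.
import random

def sign(a, b):
    """Sign from reordering the basis blade product e_a e_b into canonical order."""
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(x, y):
    """Geometric product."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            k = ba ^ bb
            out[k] = out.get(k, 0.0) + sign(ba, bb) * ca * cb
    return out

def wedge(x, y):
    """Outer product: basis blades contribute only if they share no factors."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            if ba & bb == 0:
                out[ba | bb] = out.get(ba | bb, 0.0) + sign(ba, bb) * ca * cb
    return out

def lcontr(x, y):
    """Left contraction: every factor of the left blade must occur in the right."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            if ba & ~bb == 0:
                out[ba ^ bb] = out.get(ba ^ bb, 0.0) + sign(ba, bb) * ca * cb
    return out

def sprod(x, y):
    """Scalar product A * B = <A B>_0."""
    return gp(x, y).get(0, 0.0)

def close(x, y, tol=1e-9):
    return all(abs(x.get(k, 0.0) - y.get(k, 0.0)) < tol for k in set(x) | set(y))

random.seed(1)
rand_mv = lambda: {k: random.uniform(-1, 1) for k in range(8)}  # all 8 basis blades
A, B, C = rand_mv(), rand_mv(), rand_mv()

assert close(lcontr(wedge(A, B), C), lcontr(A, lcontr(B, C)))      # (A^B)]C = A](B]C)
assert abs(sprod(wedge(A, B), C) - sprod(A, lcontr(B, C))) < 1e-9  # (A^B)*C = A*(B]C)
```

Both asserts pass for random (non-homogeneous) multivectors, which is exactly the "arbitrary multivectors" claim above.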

Crucially, contraction also satisfies the following dualities for any pseudoscalar $I$ $$ A\lcontr(BI) = (A\wedge B)I,\quad A\wedge(BI) = (A\lcontr B)I $$ This can be proved from the adjoint identity $$\begin{aligned} C*[A\lcontr(BI)] & = (C\wedge A)*(BI) = \form{(C\wedge A)BI}_0 = \form{(C\wedge A)B}_nI \\& = (C\wedge A\wedge B)I = \form{C(A\wedge B)I}_0 \\& = C*[(A\wedge B)I]. \end{aligned}$$ Since the scalar product is nondegenerate whenever the underlying metric is, this proves one of the dualities. The other is proved simply by replacing $I$ with $I^{-1}$ and $B$ with $BI$.
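As a concrete instance of the first duality (my own example, with an orthonormal basis and $I = e_1e_2e_3$): taking $A = e_1$ and $B = e_2$,

$$ A\lcontr(BI) = e_1\lcontr(e_2\,e_1e_2e_3) = e_1\lcontr(-e_1e_3) = -e_3, \qquad (A\wedge B)I = e_1e_2\,e_1e_2e_3 = -e_3, $$

as the duality requires.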

Now consider when $A, B$ are blades. $BI$ is a blade: we can find an orthogonal basis $e_1,\dotsc,e_n$ such that $$ B = be_ke_{k-1}\dotsb e_1,\quad I = e_1e_2\dotsb e_n $$ for some scalar $b$. Now by duality $$ A\lcontr B = [A\wedge(BI)]I^{-1}. $$ This is clearly a blade.


Here is a more geometric perspective.

First, a basic identity. Let $T : V \to V$ be linear. This map extends uniquely to an outermorphism on the exterior algebra $$ T(A\wedge B) = T(A)\wedge T(B). $$ If $a, b$ are vectors, then the adjoint $\bar T$ of $T$ is defined by $$ \bar T(a)*b = a*T(b). $$ You can find that the adjoint of the outermorphism is the outermorphism of the adjoint, so this equation extends to multivectors. Now consider that $$\begin{aligned} C*T(\bar T(A)\lcontr B) & = \bar T(C)*(\bar T(A)\lcontr B) = (\bar T(C)\wedge\bar T(A))*B \\& = \bar T(C\wedge A)*B = (C\wedge A)*T(B) \\& = C*(A\lcontr T(B)) \end{aligned}$$ and thus $$ T(\bar T(A)\lcontr B) = A\lcontr T(B). $$ I justify this in terms of subspaces further below.

Now consider the case that $B$ is a blade and $T = P_B$, the orthogonal projection onto the subspace of $V$ represented by $B$. It is easy to prove that $\bar P_B = P_B$; thus $$ P_B(P_B(A)\lcontr B) = A\lcontr B. \tag{$*$} $$ This proves two things:

  1. $A\lcontr B$ is in the image of $P_B$, so it is a linear combination of blades lying in the subspace represented by $B$.
  2. This means we can in fact remove the outer $P_B$ in ($*$) and obtain $$ A\lcontr B = P_B(A)\lcontr B. $$
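As a small concrete check of $(*)$ (my own example): in an orthonormal basis take $B = e_1e_2$ and $A = e_1 + e_3$, so $P_B(A) = e_1$. Then

$$ A\lcontr B = (e_1 + e_3)\lcontr(e_1e_2) = e_2 = e_1\lcontr(e_1e_2) = P_B(A)\lcontr B, $$

and indeed $P_B(A\lcontr B) = P_B(e_2) = e_2 = A\lcontr B$.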

Now suppose $A$ is also a blade. We already showed that $AI$ is a blade as well; geometrically, this corresponds to taking the orthogonal complement of $A$. It is easy to show that $$ A\lcontr I = AI. $$ But $B$ is a pseudoscalar for the subspace it represents, and $P_B(A)$ is a blade contained in this subspace. Thus $$ A\lcontr B = P_B(A)\lcontr B $$ is a blade. In fact, this proves the following geometric interpretation of the contraction: if $[X]$ is the subspace represented by a blade $X$ then $$ [A\lcontr B] = \begin{cases} V &\text{if }\exists v \in [A].\: v\perp[B],\\ [A]^\perp\cap[B] &\text{otherwise}. \end{cases} $$ So the contraction is essentially relative orthogonalization. Note that $$ P_B([A])^\perp\cap [B] = [A]^\perp\cap [B]. $$
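Two examples of this interpretation (my own illustration): with $A = e_1e_2$ and $B = e_1e_2e_3$ we get $A\lcontr B = -e_3$, and indeed $[e_3] = [A]^\perp\cap[B]$; with $A = e_1e_3$ and $B = e_1e_2$ we get $A\lcontr B = 0$, matching the first case, since $e_3 \in [A]$ is orthogonal to $[B]$.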


We can justify the adjoint equation $$ T(\bar T(A)\lcontr B) = A\lcontr T(B) $$ more geometrically. Consider a fixed vector $v$ and arbitrary $w \in T(v)^\perp$: $$ \bar T(v)\cdot w = 0 = v\cdot T(w). $$ What this is saying is that $\bar T$ is the unique map (up to scaling of some sort) such that $$ T(\bar T(v)^\perp) \subseteq v^\perp \quad\text{or equivalently}\quad \bar T(T(v)^\perp) \subseteq v^\perp. $$ If $S$ is a subspace then this generalizes to $$ T(\bar T(S)^\perp) \subseteq S^\perp $$ with equality when $T$ (and hence $\bar T$) are bijective. You can see this as follows: $$ T(\bar T(S)^\perp) = T(\bigcap_{v \in \bar T(S)}v^\perp) = T(\bigcap_{v \in S}\bar T(v)^\perp) \subseteq \bigcap_{v \in S}T(\bar T(v)^\perp) \subseteq \bigcap_{v \in S}v^\perp = S^\perp, $$ with equality in the case of bijectivity following from dimension counting. A direct consequence is the restriction to relative orthogonal complements $S^\perp\cap R$: $$ T(\bar T(S)^\perp\cap R) \subseteq S^\perp\cap T(R). $$ This is precisely the analog of $$ T(\bar T(A)\lcontr B) = A\lcontr T(B) $$ with $A$ playing the role of $S$ and $B$ the role of $R$.

Second answer

This is not a full answer, but instead a discussion of your example. I'd have to think further about how to prove the assertion, but somebody else may beat me to that.

Your bivector example, say $B = b \cdot A$, can be factored into a pair of orthogonal vectors in a number of ways. This is always possible in a 3D subspace like the one you have used for your example. One technique is to form such a pair by dotting the bivector with any vector that is not perpendicular to its plane. For example:

$$ \begin{aligned} v_1 &= B \cdot a_1, \\ v_2 &= B \cdot v_1. \end{aligned} $$

Both of these vectors lie in the plane and are perpendicular by construction: $v_1$ is proportional to the projection of $a_1$ onto the plane of $B$, rotated 90 degrees, and $v_2$ is proportional to a further 90-degree rotation of $v_1$.

You can verify that $B \propto v_1 v_2$.

This is a bit hard to see with your example as stated, but you can verify it computationally easily enough:

Mathematica computational example.
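For readers without Mathematica, here is a rough equivalent of that check in plain Python (my own sketch, not the author's notebook, assuming orthonormal $a_i = e_i$ and $b = e_1 + 2e_2 + 3e_3$):

```python
# Multivector = dict {bitmask: coeff}; bit i set means a factor of e_{i+1}.
# Euclidean metric, e_i^2 = +1.

def sign(a, b):
    """Sign from reordering the basis blade product e_a e_b into canonical order."""
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(x, y):
    """Geometric product, dropping numerically zero components."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            k = ba ^ bb
            out[k] = out.get(k, 0.0) + sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def inner(x, y, s, t):
    """Hestenes inner product of an s-vector x with a t-vector y: <x y>_|s-t|."""
    return {k: v for k, v in gp(x, y).items() if bin(k).count("1") == abs(s - t)}

e1, e2, e3 = {0b001: 1.0}, {0b010: 1.0}, {0b100: 1.0}
b = {0b001: 1.0, 0b010: 2.0, 0b100: 3.0}   # b = e1 + 2 e2 + 3 e3
A = gp(gp(e1, e2), e3)                     # simple 3-vector a1 a2 a3 with a_i = e_i
B = inner(b, A, 1, 3)                      # the bivector b . A from the question

v1 = inner(B, e1, 2, 1)                    # v1 = B . a1
v2 = inner(B, v1, 2, 1)                    # v2 = B . v1

# v1 and v2 are orthogonal, and B is proportional to v1 v2:
assert inner(v1, v2, 1, 1).get(0, 0.0) == 0.0
lam = gp(v1, v2)[0b011] / B[0b011]
assert all(abs(gp(v1, v2).get(k, 0.0) - lam * B[k]) < 1e-9 for k in B)
```

With these numbers $v_1 = 2e_3 - 3e_2$, $v_2 = -13e_1 + 2e_2 + 3e_3$, and $v_1 v_2 = -13\,B$, so $B \propto v_1 v_2$ as claimed.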

I have a few other examples of this sort of factorization as problems in chapter I (solutions are available at the end of the chapter) of my book, a free PDF copy of which is available at Geometric Algebra for Electrical Engineers.

Geometrically, the essentially 3D example you have posed can also be viewed in terms of duality. Specifically, a vector can be expressed as the dual of a bivector (a simple 2-vector) that in turn can be expressed as a product of two perpendicular vectors, as illustrated here:

(figure: three perpendicular vectors)

We can write:

$b \propto v_1 v_2 I,\qquad b \cdot v_1 = b \cdot v_2 = v_1 \cdot v_2 = 0$

(i.e.: $b \propto v_1 \times v_2$), just as we can write

$T = a_1 a_2 a_3 \propto I$,

where $I$ is the unit pseudoscalar for the 3D subspace spanned by $a_1, a_2, a_3$.

Let

$b = \beta v_1 v_2 I$,

and

$T = \alpha I$,

leaving

$b \cdot T = (\beta v_1 v_2 I)(\alpha I) = \alpha \beta v_1 v_2 I^2 = -\alpha \beta v_1 v_2,$

since $I^2 = -1$ for the pseudoscalar of a 3D Euclidean subspace.

We see explicitly that $b \cdot T$ is simple, as it is proportional to the product of two orthogonal vectors.