Geometric/Clifford algebra - intuition behind this derivation of a formula that recovers a rotor from a vector basis and its transformation


This short manuscript by Francisco G. Montoya presents and proves a general formula that recovers, up to sign, the rotor effecting a given rotation, from a vector basis and its rotated image. Although I could follow the steps, I can't see how one would come up with summations (3) and (6) as starting points out of the blue. Is there some intuition that could take us from the defining equation of a rotor transformation ($u \to R u R^\dagger$) to the summations presented?

The author begins with summation (3), given by

$$\sum_{k=1}^n \sigma_k R^\dagger \sigma_k \quad,$$

where the $\sigma_k$ are the original, non-rotated basis vectors, and

$$R^\dagger = \alpha - \sum_{i<j}^n \beta_{ij} \sigma_{ij} \quad,$$

is just the reverse of the rotor $R = \alpha + \sum_{i<j}^n \beta_{ij} \sigma_{ij}$, written as a scalar plus a bivector.

He shows that the first summation is

$$\sum_{k=1}^n \sigma_k R^\dagger \sigma_k = 4 \alpha + (n-4) R^\dagger \quad.$$

He then forms

$$\sum_{k=1}^n\mu_k \sigma _k = R \sum_{k=1}^n \sigma_k R^\dagger \sigma_k = 4\alpha R + (n-4) \quad,$$

where the $\mu_k$ are the rotated $\sigma_k$. So

$$4\alpha R = \sum_{k=1}^n\mu_k \sigma _k + (4-n) \quad,$$

which means $R$ is a scalar multiple of $\sum_{k=1}^n\mu_k \sigma _k + (4-n)$ (remember that $\alpha$ is the scalar part of $R$). Enforcing the constraint that $\lVert R \rVert = 1$ means that

$$R = \pm \frac{\sum_{k=1}^n\mu_k \sigma _k + (4-n)}{\lVert \sum_{k=1}^n\mu_k \sigma _k + (4-n) \rVert} \quad.$$

(I included the $\pm$).
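As a sanity check (my own sketch, nothing from the manuscript), the final formula is easy to verify numerically. The snippet below implements Euclidean multivectors as dicts keyed by basis-blade bitmasks, builds a rotor $R = e^B$ in $G(3)$, rotates the basis, and confirms that normalising $(4-n) + \sum_k \mu_k \sigma_k$ recovers $\pm R$. All helper names (`gp`, `mv_exp`, `reverse`) are mine.

```python
from math import sqrt

def gp(a, b):
    """Geometric product of Euclidean multivectors, stored as dicts that
    map a basis-blade bitmask (bit k set = blade contains sigma_{k+1})
    to a real coefficient."""
    out = {}
    for ba, ca in a.items():
        for bb, cb in b.items():
            s, t = 0, ba >> 1          # count swaps needed to reorder vectors
            while t:
                s += bin(t & bb).count("1")
                t >>= 1
            key = ba ^ bb              # shared vectors square to +1 and cancel
            out[key] = out.get(key, 0.0) + (-1) ** s * ca * cb
    return out

def add(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0.0) + v
    return out

def reverse(a):
    # A grade-r blade reverses with sign (-1)^(r(r-1)/2).
    return {k: v * (-1) ** (bin(k).count("1") * (bin(k).count("1") - 1) // 2)
            for k, v in a.items()}

def norm(a):
    return sqrt(gp(a, reverse(a)).get(0, 0.0))

def mv_exp(b, terms=30):
    # exp(B) via its power series; converges quickly for modest |B|.
    out, term = {0: 1.0}, {0: 1.0}
    for k in range(1, terms):
        term = {m: c / k for m, c in gp(term, b).items()}
        out = add(out, term)
    return out

n = 3
B = {0b011: 0.4, 0b101: -0.2, 0b110: 0.7}     # a bivector in G(3)
R = mv_exp(B)                                  # unit rotor R = exp(B)
Rd = reverse(R)

sigma = [{1 << k: 1.0} for k in range(n)]      # orthonormal basis sigma_k
mu = [gp(gp(R, s), Rd) for s in sigma]         # rotated basis mu_k = R sigma_k R^dagger

S = {0: 4.0 - n}                               # S = (4 - n) + sum_k mu_k sigma_k
for m, s in zip(mu, sigma):
    S = add(S, gp(m, s))
nS = norm(S)
S = {k: v / nS for k, v in S.items()}          # normalise: should give +/- R

err = min(max(abs(S.get(k, 0.0) - sgn * R.get(k, 0.0)) for k in set(S) | set(R))
          for sgn in (1, -1))
print(f"max |S -/+ R| coefficient error: {err:.1e}")   # ~machine precision
```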

My question boils down to what thought process sensibly connects a rotor's definition and action on a vector to conjuring up the summation

$$\sum_{k=1}^n \sigma_k R^\dagger \sigma_k \quad,$$

which is the key to the whole proof. I could see how one might come up with the second summation, $\sum_{k=1}^n \mu_k \sigma_k$, after having been presented the first one, but it still looks like a fairly non-obvious development.

Here is a somewhat weird way to see this. I don't consider this intuitive, so this is more of a long comment than an answer.

Write $R = \alpha + B$; then
$$ \sum_k\sigma_kR^\dagger\sigma_k = \nabla R^\dagger x = n\alpha - \nabla Bx = n\alpha - (n-4)B = nR^\dagger - 4(R^\dagger - \alpha) = (n-4)R^\dagger + 4\alpha. $$
The first equality holds because the basis $\sigma_k$ is orthonormal and hence its own reciprocal: for any basis $e_k$ with reciprocal $e^k$ and any bilinear function $F({-},{-})$ we always have
$$ \sum_k F(e^k, e_k) = F(\nabla, x). $$

If $A_r$ has grade $r$ and is constant, the identities used above read
$$ \nabla x = n,\qquad \nabla A_rx = (n-2r)\hat A_r, $$
where $\hat A_r$ denotes grade involution. The second can be proved using the more basic identity
$$ \nabla(x\cdot A_r) = (A_r\cdot\nabla)x = rA_r $$
and then differentiating
$$ x\cdot A_r = \frac12\left(xA_r - \hat A_rx\right). $$
These identities can be found starting on page 51 of *Clifford Algebra to Geometric Calculus* by Hestenes and Sobczyk.
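The second identity can also be checked by brute force in its orthonormal-coordinate form $\sum_k e_k A_r e_k = (-1)^r(n-2r)A_r$ (the $(-1)^r$ is the grade involution). The sketch below, using my own minimal geometric-product helper rather than any particular library, verifies it for every basis blade in dimensions 2 through 5.

```python
from itertools import combinations

def gp(a, b):
    """Geometric product of Euclidean multivectors stored as dicts
    mapping basis-blade bitmasks (bit k = vector e_{k+1}) to coefficients."""
    out = {}
    for ba, ca in a.items():
        for bb, cb in b.items():
            s, t = 0, ba >> 1          # count swaps needed to reorder vectors
            while t:
                s += bin(t & bb).count("1")
                t >>= 1
            key = ba ^ bb              # shared vectors square to +1 and cancel
            out[key] = out.get(key, 0.0) + (-1) ** s * ca * cb
    return out

# Verify sum_k e_k A_r e_k = (-1)^r (n - 2r) A_r for every basis r-blade:
# the coordinate form of nabla A_r x = (n - 2r) \hat A_r.
max_err = 0.0
for n in (2, 3, 4, 5):
    for r in range(n + 1):
        for blade in combinations(range(n), r):
            mask = sum(1 << i for i in blade)
            A = {mask: 1.0}
            total = {}
            for k in range(n):
                e = {1 << k: 1.0}
                for key, v in gp(gp(e, A), e).items():
                    total[key] = total.get(key, 0.0) + v
            expected = (-1) ** r * (n - 2 * r)
            for key in set(total) | {mask}:
                target = expected if key == mask else 0.0
                max_err = max(max_err, abs(total.get(key, 0.0) - target))
print("max deviation:", max_err)
```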


Actually, this shows that the formula is more general. Let $\sigma_k$ be arbitrary (not necessarily orthonormal) with reciprocal basis $\sigma^k$, and let $\mu_k = R\sigma_kR^\dagger$; since the rotation preserves inner products, the reciprocal of the rotated basis is $\mu^k = R\sigma^kR^\dagger$. Now consider
$$ \sum_k\mu^k\sigma_k = \sum_kR\sigma^kR^\dagger\sigma_k = R\nabla R^\dagger x = 4\alpha R + (n-4), $$
so finally
$$ R \propto 4 - n + \sum_k\mu^k\sigma_k. $$

We can actually turn this into a matrix computation. Let $G_{ij} = \mu_i\cdot\mu_j$ be the Gram matrix of the $\mu_k$, and express $\sigma_k$ and $\mu_k$ in terms of an orthonormal basis $e_k$ (like the standard basis), so that we get matrices with columns $\Sigma = (\sigma_1,\dotsc,\sigma_n)$ and $U = (\mu_1,\dotsc,\mu_n)$. Form the matrix
$$ \Gamma = UG^{-1}\Sigma^T. $$
Then the scalar part of $R$ is proportional to
$$ 4 - n + \mathrm{Tr}(\Gamma) $$
and the bivector part to
$$ \sum_{i<j}(\Gamma - \Gamma^T)_{ij}e_ie_j. $$
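As a sanity check on the matrix form (my own sketch; variable names are not from the manuscript): writing the rotation as a matrix $M$, so $U = M\Sigma$, one gets $\Gamma = UG^{-1}\Sigma^T = M\Sigma(\Sigma^TM^TM\Sigma)^{-1}\Sigma^T = M$ for any invertible basis $\Sigma$. So $\Gamma$ collapses to the rotation matrix itself, and the scalar and bivector parts read off from it are basis-independent. NumPy confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Random rotation matrix M (orthogonal, det +1) via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]
M = Q

Sigma = rng.normal(size=(n, n))   # arbitrary invertible basis, columns sigma_k
U = M @ Sigma                     # rotated basis, columns mu_k
G = U.T @ U                       # Gram matrix G_ij = mu_i . mu_j

Gamma = U @ np.linalg.inv(G) @ Sigma.T

# Gamma equals the rotation matrix regardless of the basis chosen, so the
# scalar part 4 - n + Tr(Gamma) and the bivector coefficients
# (Gamma - Gamma^T)_{ij}, i < j, are basis-independent as claimed.
print(np.allclose(Gamma, M))                    # True
scalar = 4 - n + np.trace(Gamma)
bivector = (Gamma - Gamma.T)[np.triu_indices(n, k=1)]
```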