How can I calculate a $4\times 4$ rotation matrix to match a 4d direction vector?

I have two 4d vectors, and need to calculate a $4\times 4$ rotation matrix to point from one to the other.

edit - I'm getting an idea of how to do it conceptually: find the plane in which the vectors lie, calculate the angle between the vectors using the dot product, then construct the rotation matrix based on the two. The trouble is I don't know how to mechanically do the first or last of those three steps. I'm trying to program objects in 4-space, so an ideal solution would be computationally efficient too, but that is secondary.

There are 5 answers below.

BEST ANSWER

Here's a different approach. Let the two vectors be $u$ and $v$; presumably they have the same length (otherwise no such rotation exists). Let $v'$ be the vector obtained from $v$ by flipping the sign of one of its coordinates. I do it in two steps. First I find a reflection $S$ that maps $u\mapsto v'$. Then I multiply the result by a diagonal matrix $D$, where all the entries save one are $+1$, and there is a single $-1$ at the position of the flipped coordinate. As the determinants of both $S$ and $D$ are $-1$, and, being orthogonal reflections, they both obviously preserve the lengths of vectors, the product $DS$ will thus be in $SO(4)$ (determinant one, and preserves lengths).

How to find $S$? This is easy. Let $n=v'-u$ be the difference vector. All we need to do is to reflect w.r.t. the hyperplane $H$ that has $n$ as a normal. The formula for this reflection is $$ S(x)=x-2\,\frac{(x,n)}{(n,n)}n. $$ To find the matrix of $S$, all you need to do is calculate the images of the basis vectors using that formula.

Numerical instabilities may happen, if $n$ is very short. If that happens, flip a different coordinate instead.
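The steps above can be sketched in code. Everything here (the function name, the numpy representation, and the heuristic of picking the flipped coordinate that makes $n$ longest) is my own illustration of the answer, not part of it:

```python
import numpy as np

def rotation_via_reflections(u, v):
    """Rotation R in SO(4) with R @ u == v (requires |u| == |v|).

    The reflection trick: flip one coordinate of v to get v', reflect u
    onto v' across the hyperplane with normal n = v' - u, then un-flip
    that coordinate with a diagonal matrix D.  det(D @ S) = +1.
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    # Pick the coordinate whose flip gives the longest n, per the
    # numerical-stability remark above.
    candidates = [v - u - 2.0 * v[i] * np.eye(4)[i] for i in range(4)]
    i = int(np.argmax([np.linalg.norm(n) for n in candidates]))
    n = candidates[i]
    S = np.eye(4) - 2.0 * np.outer(n, n) / (n @ n)  # reflection sending u to v'
    D = np.eye(4)
    D[i, i] = -1.0                                   # un-flip coordinate i
    return D @ S
```

Computing $S$ as $I - 2nn^T/(n,n)$ is the matrix form of applying the reflection formula to each basis vector at once.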

ANSWER

Consider the plane $P\subset \mathbf{R}^4$ containing the two vectors given to you. Calculate the inner product and from it the angle between them; call the angle $x$. Now there is a 2-d rotation $A$ by angle $x$ inside $P$. Consider the identity $I$ on the orthogonal complement of $P$ in $\mathbf{R}^4$. Then $A\oplus I$ is the required 4d matrix.
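A minimal numerical sketch of this $A \oplus I$ construction, assuming $u$ and $v$ are linearly independent and of equal length (the function name and the Gram-Schmidt details are my own):

```python
import numpy as np

def rotation_in_plane(u, v):
    """A 2-d rotation inside the plane span{u, v}, extended by the
    identity on its orthogonal complement.  Assumes u, v are linearly
    independent and |u| == |v|."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    b1 = u / np.linalg.norm(u)
    w = v - (v @ b1) * b1                # Gram-Schmidt: part of v orthogonal to u
    b2 = w / np.linalg.norm(w)           # {b1, b2} is an o.n. basis of the plane
    cos_x = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    sin_x = np.sqrt(max(0.0, 1.0 - cos_x * cos_x))  # angle measured from u toward v
    P = np.outer(b1, b1) + np.outer(b2, b2)         # projector onto the plane
    A = cos_x * P + sin_x * (np.outer(b2, b1) - np.outer(b1, b2))
    return np.eye(4) - P + A             # identity off the plane, A on it
```

By construction $v/|v| = \cos x\, b_1 + \sin x\, b_2$, so the in-plane rotation carries $u$ exactly to $v$.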

ANSWER

Use clifford algebra. You have two vectors $a$ and $b$, and you want to rotate $a$ to $b$. Compute the bivector for the plane the vectors reside in, $B = a \wedge b$. Normalize this bivector: $\hat B = B/\sqrt{|B^2|}$. Note that $\hat B^2 = -1$. Compute the rotation angle $\theta$ from $a \wedge b = |a||b| \hat B \sin \theta$.

There is a rotor $R = \exp(-\hat B \theta/2) = \cos \frac{\theta}{2} - \hat B \sin \frac{\theta}{2}$, and the full rotation can be computed by

$$\underline R(c) = R c R^{-1}$$

for any vector $c$. Compute the components by plugging in basis vectors.


Edit: I will work an example that, hopefully, convinces you of the usefulness of this method.

Let $a = e_1$ and $b = e_3 + e_4$ be two vectors. In clifford algebra, we have a wedge product $(\wedge)$ that is anticommutative (like the cross product) but whose result is not a vector. The result is instead called a bivector, and this object is suitable for describing planes in an $N$-dimensional space.

The wedge product of $a$ and $b$ is $B = a \wedge b$ and is written out in components as

$$B = a \wedge b = e_1 \wedge (e_3 + e_4) = e_1 \wedge e_3 + e_1 \wedge e_4$$

That's all there is to computing the wedge product. For brevity, however, I will write this as $B = e_1 e_3 + e_1 e_4$. This is legal using the geometric product, defined as

$$ab = a \cdot b+ a \wedge b$$

The geometric product is useful because it captures, in a single product, both how parallel and how perpendicular one vector is to another. On a practical level, computations with the geometric product in a basis look like this: let $i, j \in \{1, 2, 3, 4\}$; then we have

$$e_i e_j = \begin{cases} 1, & i = j \\ -e_j e_i, & i \neq j\end{cases}$$

This holds along with associativity, distributivity over addition, and all the usual convenient properties.
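These basis rules are mechanical enough to program directly. A sketch of my own (the tuple-of-indices representation and the function name are illustration, not standard library API): represent a basis blade as a tuple of strictly increasing indices, e.g. `(1, 3)` for $e_1 e_3$, and compute products by sorting with sign bookkeeping.

```python
def blade_product(a, b):
    """Geometric product of two basis blades over an orthonormal basis.
    Blades are tuples of strictly increasing indices; returns (sign, blade).
    Implements the rules: e_i e_i = 1 and e_i e_j = -e_j e_i for i != j."""
    combined = list(a) + list(b)
    sign = 1
    i = 0
    # Gnome sort: each swap of two distinct e_i picks up a factor of -1;
    # two equal neighbours contract to 1 and are deleted.
    while i < len(combined) - 1:
        if combined[i] > combined[i + 1]:
            combined[i], combined[i + 1] = combined[i + 1], combined[i]
            sign = -sign
            i = max(i - 1, 0)
        elif combined[i] == combined[i + 1]:
            del combined[i:i + 2]        # e_i e_i = 1
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(combined)
```

For example, `blade_product((1, 3), (1,))` returns `(-1, (3,))`, matching the computation $e_1 e_3 e_1 = -e_3$ used below; a full multivector product is then a sum of such blade products.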

It is through the geometric product that we can compute the magnitude of $B$:

$$B^2 = e_1 (e_3 + e_4) e_1 (e_3 + e_4) = -e_1 e_1 (e_3 + e_4) (e_3 + e_4) = -2$$

That this squares to a negative number is actually quite important. Taking an exponential will result in the usual trig functions coming out of power series, and that's critically important. I dare say it is why we use trig functions in Euclidean space. Using bivectors, you get stuff that would ordinarily need an imaginary unit, but in an entirely real space!

So our normalized bivector is $\hat B = e_1 (e_3 + e_4)/\sqrt{2}$. Here, I picked two vectors that are already orthogonal, so the angle between them must be $\pi/2$. If they weren't already orthogonal, then you could do $a \cdot b = |a| |b| \cos\theta$ as per usual.

All we need to do now to calculate the rotation is to take the exponential of the bivector.

$$R = \exp(-\hat B \pi/4) = \cos \frac{\pi}{4} - \hat B \sin \frac{\pi}{4} = \frac{1}{\sqrt{2}} - \frac{e_1 (e_3 + e_4)}{2}$$

The half-angle use of $\pi/4$ instead of $\pi/2$ is important for reasons I have no time to get into, but if you're familiar with quaternions, it should be no surprise.

The final rotation comes out to

$$\underline R(c) = R c R^{-1} = \left( \frac{1}{\sqrt{2}} - \frac{e_1 e_3 + e_1 e_4}{2} \right) c \left( \frac{1}{\sqrt{2}} + \frac{e_1 e_3 + e_1 e_4}{2} \right)$$

for any vector $c$. For the sake of demonstration, I will choose $c = e_1$ to show how the computation works. Again, these products are geometric, using the rules I outlined above. We start by just plugging that in:

$$\underline R({\color{green}{e_1}}) =\left( \frac{1}{\sqrt{2}} - \frac{e_1 e_3 + e_1 e_4}{2} \right) {\color{green}{e_1}} \left( \frac{1}{\sqrt{2}} + \frac{e_1 e_3 + e_1 e_4}{2} \right)$$

Through associativity, move $\color{green}{e_1}$ into the brackets on the left.

$$\underline R(\color{green}{e_1}) =\left( \frac{e_1}{\sqrt{2}} - \frac{e_1 e_3 \color{green}{e_1} + e_1 e_4 \color{green}{e_1}}{2} \right) \left( \frac{1}{\sqrt{2}} + \frac{e_1 e_3 + e_1 e_4}{2} \right)$$

$e_1 e_3 e_1 = -e_1 e_1 e_3 = -e_3$ by associativity and anticommutativity of orthogonal vectors. The same logic applies to $e_1 e_4 e_1$ to get

$$\underline R(e_1) =\left( \frac{e_1}{\sqrt{2}} + \frac{e_3 + e_4 }{2} \right) \left( \frac{1}{\sqrt{2}} + \frac{e_1 e_3 + e_1 e_4}{2} \right)$$

Now we just have to distribute and multiply.

$$\underline R(e_1) =\frac{e_1}{2} + \frac{e_3 + e_4 }{2\sqrt{2}} + \frac{{\color{red} {e_1 e_1}} e_3 + {\color{red} {e_1 e_1}} e_4}{2 \sqrt{2}} + \frac{e_3 e_1 e_3 + {\color{blue} {e_3 e_1 e_4 + e_4 e_1 e_3}} + e_4 e_1 e_4}{4}$$

Again, ${\color{red} {e_1 e_1}} = 1$, so that simplifies the third term. Note that ${\color{blue} {e_3 e_1 e_4 = - e_4 e_1 e_3}}$ (this takes 3 swaps, so it overall picks up a minus sign), so those terms cancel, and we get

$$\underline R(e_1) =\frac{e_1}{2} + \frac{e_3 + e_4 }{\sqrt{2}} + \frac{- 2e_1}{4} = \frac{e_3 + e_4}{\sqrt{2}}$$

As desired.

Clifford algebra may be unfamiliar, but it's a very powerful language for doing geometric computations. There's already a great module for doing computations in python (using sympy), where it's referred to as geometric algebra for a good reason. GA, as it's called, lends itself to an object-oriented approach to geometry. All you need is the ability to program the products of the algebra, and you're off and running.


Edit edit: The form of the final rotation can be simplified somewhat.

$$\underline R(c) = c \cos \theta - \hat B (\hat B\wedge c) (1-\cos \theta) + (c \cdot \hat B) \sin \theta$$

where $c \cdot \hat B = (c \hat B - \hat B c)/2$ is the vector part of $c\hat B$, and $\hat B \wedge c = (\hat B c + c \hat B)/2$ is the trivector part of $\hat B c$. This is the clifford algebra analogue to the Rodrigues formula, but using these particular products, it is valid in all dimensions.
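This Rodrigues-style formula has a direct matrix counterpart, which may be handier if you just want the $4\times 4$ matrix. A sketch under my own naming: if $b_1, b_2$ is an orthonormal basis of the rotation plane, the skew matrix $B = b_2 b_1^T - b_1 b_2^T$ plays the role of $\hat B$; it satisfies $B^3 = -B$, so the matrix exponential collapses to three terms.

```python
import numpy as np

def rodrigues_4d(b1, b2, theta):
    """Rotation by theta in the plane spanned by the (assumed orthonormal)
    vectors b1, b2.  B is the skew generator: B @ b1 = b2, B @ b2 = -b1.
    Since B**3 == -B, exp(theta*B) collapses to a Rodrigues-style form."""
    B = np.outer(b2, b1) - np.outer(b1, b2)
    return np.eye(4) + np.sin(theta) * B + (1.0 - np.cos(theta)) * (B @ B)
```

With $b_1 = e_1$, $b_2 = (e_3+e_4)/\sqrt 2$ and $\theta = \pi/2$ this reproduces the worked example: $e_1 \mapsto (e_3+e_4)/\sqrt 2$.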

ANSWER

Let the two vectors be $u$ and $v$. Normalise them to unit vectors. Put the two column vectors together to form a $4\times 2$ matrix $A=(u,v)$. Perform an orthogonal diagonalisation $AA^T=QDQ^T$. Permute the diagonal entries of $D$ so that they are arranged in descending order. Permute the columns of $Q$ accordingly. Then $Q^T$ is a real orthogonal matrix that maps $u$ and $v$ to the $xy$-plane, so that $Q^TA$ is of the form $\pmatrix{x_1&x_2\\ y_1&y_2\\ 0&0\\ 0&0}$. Define two $2\times2$ rotation matrices $R_1=\pmatrix{x_1&-y_1\\ y_1&x_1}$ and $R_2=\pmatrix{x_2&-y_2\\ y_2&x_2}$. Then the desired rotation matrix can be taken as $R=Q\pmatrix{R_2R_1^T\\ &I_2}Q^T$ and we have $Ru=v$.

Remark. Instead of using orthogonal diagonalisation, you may also compute the $Q$ in the above by singular value decomposition: just compute the SVD $A=Q\Sigma P^T$ and take the $Q$.
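Following the remark, here is a sketch using numpy's SVD (the function name is mine; note that `numpy.linalg.svd` already returns singular values in descending order, so no explicit permutation step is needed):

```python
import numpy as np

def rotation_via_svd(u, v):
    """Rotation R in SO(4) with R @ u == v, via the SVD construction:
    an orthogonal Q maps span{u, v} to the first two coordinates,
    we rotate there, and conjugate back."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)            # normalise, as the answer requires
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    A = np.column_stack([u, v])
    Q = np.linalg.svd(A)[0]              # left singular vectors, plane first
    QA = Q.T @ A                         # rows 2 and 3 are (numerically) zero
    (x1, x2), (y1, y2) = QA[0], QA[1]
    R1 = np.array([[x1, -y1], [y1, x1]])
    R2 = np.array([[x2, -y2], [y2, x2]])
    M = np.eye(4)
    M[:2, :2] = R2 @ R1.T                # rotate Q^T u onto Q^T v in the plane
    return Q @ M @ Q.T
```

Since $u$ is unit, $x_1^2 + y_1^2 = 1$, so $R_1$ (and likewise $R_2$) really is a rotation matrix, and $R_2 R_1^T$ carries $(x_1, y_1)$ to $(x_2, y_2)$ as claimed.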

ANSWER

There are many ways to do this, but perhaps the simplest way is to use quaternions: Let the two (nonzero) vectors be $\mathbf{p},\mathbf{q}\in\mathbb{R}^4=\mathbb{H}$, and set $$ \mathbf{u} = \frac{\overline{\mathbf{p}}\mathbf{q}}{|\overline{\mathbf{p}}\mathbf{q}|}. $$ Since $\mathbf{u}$ is a unit quaternion, the $4$-by-$4$ matrix $R_{\mathbf{u}}$ that represents right multiplication by $\mathbf{u}$ is an orthogonal matrix. Now, by the properties of quaternion multiplication, $$ R_{\mathbf{u}}\mathbf{p} = \mathbf{p}\mathbf{u} = \frac{|\mathbf{p}|}{|\mathbf{q}|}\ \mathbf{q}, $$ and the right hand side is a positive scalar multiple of $\mathbf{q}$ (and equals $\mathbf{q}$ itself when $|\mathbf{p}| = |\mathbf{q}|$).
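This can be sketched numerically as follows; the function names and the $(w, x, y, z)$ component convention are my own, and the matrix $R_{\mathbf{u}}$ is built column by column from the identity $(\sum_j c_j e_j)\mathbf{u} = \sum_j c_j (e_j \mathbf{u})$, i.e. linearity of right multiplication:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions given as arrays (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 + y1*w2 + z1*x2 - x1*z2,
        w1*z2 + z1*w2 + x1*y2 - y1*x2,
    ])

def rotation_via_quaternions(p, q):
    """SO(4) matrix for right multiplication by u = conj(p) q / |conj(p) q|,
    so that R @ p is the positive multiple (|p|/|q|) q of q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    conj_p = p * np.array([1.0, -1.0, -1.0, -1.0])
    u = quat_mul(conj_p, q)
    u = u / np.linalg.norm(u)
    # Column j of R_u is e_j * u, by linearity of right multiplication.
    return np.column_stack([quat_mul(e, u) for e in np.eye(4)])
```

This is likely the cheapest route computationally: one quaternion product and a normalisation, with no eigen- or singular-value decomposition.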