Finding the rotation matrix in n-dimensions


Suppose that we know two real vectors with $n$ components, which are linked by some arbitrary transformation (scaling, rotation, shearing, ...).

Now, I think that it is possible to recover both the scaling matrix and the rotation matrix. For example, the scaling matrix would be a diagonal matrix with $n$ entries representing the $n$ scaling factors.

On the other hand, I can normalize the two vectors and then compute the rotation matrix between them, can't I?

1) How can I retrieve the rotation matrix? (if it is possible, obviously)

2) I need such a matrix in order to use it in a computational mechanics context. How would you compute it in practice?

I have read somewhere that in $n$ dimensions one can apply rotations that are combinations of rotations about $(n-2)$-dimensional hyperplanes. Is this a convenient way to compute such a matrix?

Thank you very much!!

EDIT: Let me put it this way, to make clearer what I am trying to do...

Vector $x= [ 2, 4, 5, 3, 6 ]^T$ and vector $y= [ 6, 2, 0, 1, 7 ]^T$. I would like to find the rotation matrix that aligns vector $x$ with vector $y$. First of all, I understood that one needs to find a basis of the orthogonal complement of the two vectors, i.e. of the plane containing them. Consequently, one first computes the null space of the matrix whose rows are $x$ and $y$:

  • Row-reducing: $$A= \left[ \begin{array}{ccccc} 1 & 0 & -1/2 & -1/10 & 4/5 \\ 0 & 1 & 3/2 & 4/5 & 11/10 \\ \end{array} \right] $$

  • Basis vectors of the orthogonal complement (associated with the free variables $x_3, x_4, x_5$): $$v_1^{\perp}= \left[ \begin{array}{c} 1/2 \\ -3/2 \\ 1 \\ 0 \\ 0 \\ \end{array} \right], v_2^{\perp}= \left[ \begin{array}{c} 1/10 \\ -4/5 \\ 0 \\ 1 \\ 0 \\ \end{array} \right], v_3^{\perp}= \left[ \begin{array}{c} -4/5 \\ -11/10 \\ 0 \\ 0 \\ 1 \\ \end{array} \right] $$

  • Then one can apply Gram-Schmidt to the two groups of vectors: the one spanning the plane containing the two initial vectors, and the one spanning the orthogonal subspace. The columns of the resulting matrix form a basis of the space:

$$E= \left[ \begin{array}{ccccc} 0.7255 & -0.0117 & 0.2673 & -0.0716 & -0.6301 \\ 0 & 0.4429 & -0.8018 & -0.2409 & -0.3209 \\ -0.3627 & 0.6701 & 0.5345 & -0.3255 & -0.1663 \\ -0.0725 & 0.3555 & 0 & 0.9115 & -0.1937 \\ 0.5804 & 0.4778 & 0 & 0 & 0.6594 \\ \end{array} \right] $$

The last matrix is correct and its columns form a basis, since the scalar product of columns $i$ and $j$ equals the Kronecker delta $\delta_{ij}$.
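The orthonormality claim is easy to check numerically. Here is a small NumPy sketch of the Gram-Schmidt step (the particular orthonormal basis depends on the order in which the vectors are processed, so the columns need not match $E$ entry for entry; what matters is that $E^T E = I$):

```python
import numpy as np

x = np.array([2, 4, 5, 3, 6], dtype=float)
y = np.array([6, 2, 0, 1, 7], dtype=float)

# Null-space basis of the 2x5 matrix with rows x, y (from the row reduction above)
v1 = np.array([1/2, -3/2, 1, 0, 0])
v2 = np.array([1/10, -4/5, 0, 1, 0])
v3 = np.array([-4/5, -11/10, 0, 0, 1])

# Gram-Schmidt over all five vectors; the first two columns span the plane of x and y
vectors = [x, y, v1, v2, v3]
basis = []
for v in vectors:
    w = v - sum((v @ q) * q for q in basis)   # remove components along earlier basis vectors
    basis.append(w / np.linalg.norm(w))
E = np.column_stack(basis)

# E^T E should be the identity (columns pairwise orthonormal)
print(np.allclose(E.T @ E, np.eye(5)))  # True
```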

Now I have almost all the information I need; I can compute the angle between the two vectors from

$$ \cos(\theta)=\frac{x \cdot y}{\|x\| \, \|y\|} $$

but now how do I construct the rotation matrix?
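As a sanity check on the numbers above, the cosine can be evaluated directly (a small NumPy sketch):

```python
import numpy as np

x = np.array([2, 4, 5, 3, 6], dtype=float)
y = np.array([6, 2, 0, 1, 7], dtype=float)

cos_theta = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos_theta)  # 65/90 = 13/18 ≈ 0.7222

# Note: these two vectors happen to have the same norm, sqrt(90)
print(np.linalg.norm(x), np.linalg.norm(y))
```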


There are 5 answers below.

BEST ANSWER (score 11)

One way to do this is to find two orthonormal vectors in the plane generated by your two vectors, and then extend them to an orthonormal basis of $\mathbb R^n$. Then, with respect to this basis, consider the rotation by angle $\theta$ in the plane generated by the first two vectors, and the identity on the space generated by the rest of the orthonormal basis. Use Gram-Schmidt to find the orthonormal basis.

As you said in a previous comment, you cannot rotate around an axis except in 3D. Rather, you need to rotate about an $(n-2)$-dimensional subspace.

So suppose you want to rotate $x$ to $y$, and they happen to have the same norm. Let $u = x/|x|$, and $v = (y-(u\cdot y)u)/|y-(u\cdot y)u|$. Then $P=uu^T + vv^T$ is a projection onto the plane generated by $x$ and $y$, and $Q=I-uu^T-vv^T$ is the projection onto the $(n-2)$-dimensional complementary subspace. So the "rotation" part only has to take place on the range of $P$. That is, $z \mapsto (z\cdot u, z\cdot v)$ is an isometric isomorphism from the range of $P$ to $\mathbb R^2$. Do the rotation on $\mathbb R^2$, then map back to $\mathbb R^n$ by $[a,b]^T \mapsto au + bv$.

So the whole rotation is $$ I - uu^T -vv^T + [u\ v]\,R_\theta\,[u\ v]^T $$ where $$ R_\theta = \begin{bmatrix}\cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix} $$ and $\cos(\theta) = x\cdot y/(|x||y|)$.

(By the way, if a reflection matrix is OK, then consider Householder transformations instead.)

Here is some MATLAB code (tested in Octave, so there may be minor syntax differences in MATLAB).

```matlab
x = [2, 4, 5, 3, 6]';
y = [6, 2, 0, 1, 7]';

u = x / norm(x);          % unit vector along x
v = y - (u' * y) * u;     % component of y orthogonal to u
v = v / norm(v);

cost = x' * y / (norm(x) * norm(y));
sint = sqrt(1 - cost^2);

% identity off the plane, 2-D rotation on the plane spanned by u and v
R = eye(length(x)) - u * u' - v * v' + [u v] * [cost -sint; sint cost] * [u v]';

fprintf('testing\n');
should_be_identity = R * R'
should_be_y = R * x       % equals y exactly here, since |x| = |y| for these vectors
```
Answer (score 5)

To find the rotation matrix:

  1. Find the plane $P$ containing the two vectors and the origin (always possible).

  2. Compute the angle between the two vectors. Call this angle $\alpha.$

  3. Rotate by $\alpha$ around the axis which is the normal vector to $P.$
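The three steps above apply literally only in three dimensions, where the plane $P$ has a single normal vector; there they can be carried out with Rodrigues' rotation formula. A hedged NumPy sketch (the helper name and example vectors are mine, not from the answer):

```python
import numpy as np

def rotation_aligning_3d(u, v):
    """Rotation matrix sending u/|u| to v/|v| in 3-D via Rodrigues' formula.
    Assumes u and v are not parallel."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    k = np.cross(u, v)            # axis = normal vector to the plane P (step 1 and 3)
    s = np.linalg.norm(k)         # sin(alpha)
    c = u @ v                     # cos(alpha)  (step 2)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])   # cross-product matrix of k
    return np.eye(3) + K + K @ K * (1 - c) / s**2

u = np.array([1.0, 2.0, 2.0])
v = np.array([0.0, 3.0, 4.0])
R = rotation_aligning_3d(u, v)
print(np.allclose(R @ (u / np.linalg.norm(u)), v / np.linalg.norm(v)))  # True
print(np.allclose(R @ R.T, np.eye(3)))                                  # True
```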

EDIT For more detail see: Link

Answer (score 7)

Are you familiar with quaternions? Clifford algebra offers a good $N$-dimensional generalization of quaternions.

Let me take your example. You have two vectors $x$ and $y$:

$$\begin{align*} x &= 2e_1 + 4e_2 + 5e_3 + 3e_4 + 6 e_5 \\ y &= 6 e_1 + 2 e_2 + 0 e_3 + 1e_4 + 7 e_5\end{align*}$$

With quaternions, one would compute a cross product to find the rotation axis. Using clifford algebra, one directly computes the plane of rotation using the wedge product of vectors, as one would do in exterior algebra. (A general term for such geometric primitives is blade. Vectors are just 1-dimensional blades; here, we will compute a 2-dimensional blade.)

The wedge product is

$$\begin{align*}x \wedge y = (4 - 24) e_{12} &+ (0 -30) e_{13} + (2 - 18) e_{14} + (14 - 36) e_{15} \\ +(0 - 10) e_{23} &+ (4 - 6) e_{24} + (28 - 12) e_{25} \\ + (5 - 0) e_{34} &+ (35 - 0) e_{35} \\ + (21 - 6) e_{45} & \end{align*}$$

You should appreciate how direct this is compared to finding the null space of a matrix; when you're finding the null space, you're finding the orthogonal complement to this blade; the wedge product allows us to proceed directly instead.

Anyway, to proceed as one would with quaternions, we must normalize the blade to unit magnitude. This is similar to the vector case: divide by the square root of the sum of the squares of the coefficients. I will not carry out this arithmetic here.
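Both the wedge coefficients above and the normalization are easy to check numerically. A NumPy sketch, storing the blade simply as the antisymmetric coefficient matrix $B_{ij} = x_i y_j - x_j y_i$:

```python
import numpy as np

x = np.array([2, 4, 5, 3, 6], dtype=float)
y = np.array([6, 2, 0, 1, 7], dtype=float)

# Coefficients of x ^ y on the basis e_ij (i < j): B_ij = x_i y_j - x_j y_i
B = np.outer(x, y) - np.outer(y, x)
coeffs = B[np.triu_indices(5, k=1)]   # row-major order: e12, e13, ..., e45
print(coeffs)  # [-20. -30. -16. -22. -10.  -2.  16.   5.  35.  15.]

# Magnitude: square root of the sum of squared coefficients
norm_B = np.sqrt(np.sum(coeffs**2))
B_hat = B / norm_B

# Consistency check: |x ^ y| = |x| |y| sin(theta)
cos_t = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.isclose(norm_B, np.linalg.norm(x) * np.linalg.norm(y) * np.sqrt(1 - cos_t**2)))  # True
```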

Once one has the normalized blade $\hat B$, one can construct the versor $R = \exp(-\hat B \theta/2)$. The minus sign is conventional in clifford algebra, to correspond to counterclockwise rotation; this differs from the usual quaternion convention.

The versor $R$ is then used to construct a rotation map $\underline R$ that takes in any vector $a$ and produces its rotated counterpart:

$$\underline R(a) = Ra R^{\dagger} = e^{-\hat B \theta/2} a e^{\hat B \theta/2}$$

The quantity $\hat B$ obeys $\hat B^2 = -1$, and so this generates the trigonometric functions through power series, just as in quaternions.
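Concretely, since $\hat B^2 = -1$, the exponential series splits into even and odd parts exactly as it does for complex numbers:

$$ e^{-\hat B \theta/2} = \sum_{k=0}^{\infty} \frac{(-\hat B \theta/2)^k}{k!} = \cos(\theta/2) - \hat B \sin(\theta/2) $$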

The products here are the geometric product of clifford algebra. This can be computed in terms of basis vectors like so: for any $i, j = 1, 2, \ldots, N$, set $e_i e_i = +1$, and when $i \neq j$, $e_i e_j = - e_j e_i$.

For instance, say that $R$ takes the form $\alpha - \beta e_{12}$ and $a = a^1 e_1 + a^2 e_2$. The computation would then look like

$$\begin{align*}\underline R(a) &= (\alpha - \beta e_1 e_2)(a^1 e_1 + a^2 e_2)(\alpha + \beta e_1 e_2) \\ &= (\alpha a - \beta a^1 e_1 e_2 e_1 - \beta a^2 e_1 e_2 e_2)(\alpha + \beta e_1 e_2) \\ &= (\alpha a + \beta a^1 e_2 - \beta a^2 e_1) (\alpha + \beta e_1 e_2) \\ &= \ldots \end{align*}$$

(I won't finish this calculation; I just hoped to illustrate how geometric products are computed.)
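(For reference, if one does push the multiplication through, with $\alpha = \cos(\theta/2)$ and $\beta = \sin(\theta/2)$, the result is the familiar 2-D rotation:

$$\underline R(a) = \big((\alpha^2-\beta^2)a^1 - 2\alpha\beta\, a^2\big)e_1 + \big(2\alpha\beta\, a^1 + (\alpha^2-\beta^2)a^2\big)e_2 = (a^1\cos\theta - a^2\sin\theta)\,e_1 + (a^1\sin\theta + a^2\cos\theta)\,e_2$$

using the half-angle identities $\alpha^2-\beta^2 = \cos\theta$ and $2\alpha\beta = \sin\theta$.)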

This is the most direct rotation that connects the two directions. There could be other, subsequent rotations that rotate other vectors orthogonal to these while leaving these two intact, so there are several rotation maps that map these vectors as you specified.

Once you have the rotation map $\underline R$, you can extract the matrix components by plugging in basis vectors and taking dot products. $\underline R_{12} = e_1 \cdot \underline R(e_2)$, for instance. But if you actually go to all the trouble to write something that can compute this rotation in terms of clifford algebra, it makes one wonder why you would even bother with a matrix. There is simply no need.

A special feature of this clifford algebra formulation is that, using the geometric product, it can be used to rotate higher-dimensional blades, not just vectors. Indeed, for a general blade $K$, the formula is just

$$\underline R(K) = R K R^\dagger$$

This is a special property of rotations. If you have a direct expression for the blade $K$, you need not decompose it into vectors and rotate those vectors individually to recover the image of $K$. This is handy.

Answer (score 0)

Here's a super easy way to find a rotation matrix $R$ that rotates $u$ to align with $v$ in $N$ dimensions. One feature of this approach is that $R$ does not rotate vectors outside the subspace of $u$ and $v$.

Answer (score 1)

One simple approach is to form a vector from the two normalized vectors $$n_x = \frac{x}{\|x\|},\qquad n_y = \frac{y}{\|y\|},\qquad c = n_x + n_y$$ and use it to construct a (negated) Householder reflection $$H = \frac{2cc^T}{c^Tc} - I = H^T, \qquad I = H^TH, \qquad \det(H) = (-{\tt1})^{n+1}$$ It is easy to verify that $H$ performs the desired (possibly improper) rotation $$Hn_x = n_y \quad\iff\quad Hn_y = n_x$$ Another approach constructs a slightly more complicated proper rotation $$R = 2n_yn_x^T - H, \qquad I = R^TR, \qquad \det(R) = {\tt1}, \qquad Rn_x = n_y \quad\iff\quad R^Tn_y = n_x$$
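Both constructions are easy to verify numerically with the question's vectors (a NumPy sketch):

```python
import numpy as np

x = np.array([2, 4, 5, 3, 6], dtype=float)
y = np.array([6, 2, 0, 1, 7], dtype=float)
nx, ny = x / np.linalg.norm(x), y / np.linalg.norm(y)

c = nx + ny
H = 2 * np.outer(c, c) / (c @ c) - np.eye(5)   # negated Householder reflection
R = 2 * np.outer(ny, nx) - H                   # proper rotation

print(np.allclose(H @ nx, ny), np.allclose(H @ ny, nx))                   # True True
print(np.allclose(R @ nx, ny))                                            # True
print(np.allclose(R.T @ R, np.eye(5)), np.isclose(np.linalg.det(R), 1.0)) # True True
```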