Given $\|x_1\|=\|x_2\|$, then there is an orthogonal matrix $\Gamma$ such that $x_2=\Gamma x_1$.


Let $x_1, x_2$ be members of $\mathbb R^p$ such that $\|x_1\|=\|x_2\|$. Then there is an orthogonal matrix $\Gamma$ such that $x_2=\Gamma x_1$.

How can one prove the statement above? I know that if $x_2=\Gamma x_1$, then $\|x_1\|=\|x_2\|$, since any orthogonal matrix preserves the length of a vector. But I don't know how to prove the converse direction.


Best answer:

The reflection through the hyperplane orthogonal to $x_1 - x_2$ and passing through the origin maps $x_1 \mapsto x_2$ and $x_2 \mapsto x_1$. This reflection can be written $$ \Gamma x = x - 2(x_1-x_2)\frac{(x_1-x_2)^Tx}{||x_1-x_2||^2} $$ implying that $$ \Gamma = I - 2\frac{(x_1-x_2)(x_1-x_2)^T}{||x_1-x_2||^2}, $$ where $I$ is the identity matrix, is the orthogonal matrix you seek.

(In the case $x_1 = x_2$ we can just take $\Gamma = I$.)

The hypothesis $||x_1|| = ||x_2||$ is crucial; otherwise this reflection does not swap $x_1$ and $x_2$. You should think hard geometrically about how $x_1$ gets mapped to $x_2$ given that $\Gamma$ is a reflection in the way that I described. To see how it happens algebraically, first note that $$ -2x_2^Tx_1 = ||x_1-x_2||^2 - ||x_1||^2 - ||x_2||^2. $$ Then $$ 2\frac{(x_1-x_2)^Tx_1}{||x_1-x_2||^2} = \frac{2||x_1||^2 + ||x_1-x_2||^2 - ||x_1||^2 - ||x_2||^2}{||x_1-x_2||^2} = 1, $$ using the hypothesis $||x_1|| = ||x_2||$ in the last equality. Now we see that $$ \Gamma x_1 = x_1 - (x_1 - x_2) = x_2. $$ Because $\Gamma$ is symmetric in $x_1, x_2$, this also proves that $\Gamma x_2 = x_1$.
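The Householder reflection above is easy to check numerically. A minimal sketch, with assumed example vectors of equal norm:

```python
import numpy as np

# Hypothetical example vectors with equal norms (any such pair works).
x1 = np.array([3.0, 0.0, 4.0])
x2 = np.array([0.0, 5.0, 0.0])
assert np.isclose(np.linalg.norm(x1), np.linalg.norm(x2))

# Householder reflection through the hyperplane orthogonal to v = x1 - x2.
v = x1 - x2
Gamma = np.eye(len(x1)) - 2.0 * np.outer(v, v) / (v @ v)

print(np.allclose(Gamma @ x1, x2))              # Gamma swaps x1 and x2
print(np.allclose(Gamma @ x2, x1))
print(np.allclose(Gamma.T @ Gamma, np.eye(3)))  # Gamma is orthogonal
```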

Another answer:

If either vector is the zero vector, then they both are (why?) and this is trivial (how?). Now, assuming that $x_1, x_2 \in \mathbb{R}^p$ satisfy $\|x_1\| = \|x_2\| \neq 0$, consider their normalized versions $$ y_1 = \frac{x_1}{\|x_1\|} \quad\text{and}\quad y_2 = \frac{x_2}{\|x_2\|}. $$

Using the Gram–Schmidt algorithm$^\dagger\!$, we can complete $y_1$ to an orthonormal basis $\mathcal{B}_1 = \{ y_1, u_2, \dots, u_p \}$, and similarly we can complete $y_2$ to an orthonormal basis $\mathcal{B}_2 = \{ y_2, v_2, \dots, v_p \}$. Also, let $\mathcal{E} = \{ e_1, \dots, e_p \}$ denote the standard basis of $\mathbb{R}^p$.

Now, let $P_1$ denote the matrix with vectors from $\mathcal{B}_1$ as columns, and analogously for $P_2$ with vectors from $\mathcal{B}_2$. By construction, both $P_1$ and $P_2$ are orthogonal matrices. And since $P_1 e_1 = y_1$ and $P_2 e_1 = y_2$, the matrix $$ P = P_2^{\phantom{1}} P_1^{-1} = P_2^{\phantom{1}} P_1^{T}, $$ which is orthogonal as a product of orthogonal matrices, satisfies $P y_1 = y_2$.

Now, we can check that $P$ does the job for the original vectors as well: $$ Px_1 = P \bigl( \|x_1\| \, y_1 \bigr) = \|x_1\| \, P \bigl( y_1 \bigr) = \|x_2\| \, y_2 = x_2. $$
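This construction can also be carried out numerically. A sketch with assumed example vectors, where the orthonormal completion of each $y_i$ is done via a QR factorization of $[\,y_i \mid I\,]$ (a convenient stand-in for Gram–Schmidt):

```python
import numpy as np

def complete_to_orthonormal_basis(y):
    # QR-factorize [y | I]: the p orthonormal columns of Q form a basis whose
    # first vector is +/- y; flip the sign if needed so it equals y exactly.
    Q, _ = np.linalg.qr(np.column_stack([y, np.eye(len(y))]))
    if Q[:, 0] @ y < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

# Hypothetical example vectors with ||x1|| = ||x2|| = 3.
x1 = np.array([1.0, 2.0, 2.0])
x2 = np.array([2.0, -2.0, 1.0])

y1 = x1 / np.linalg.norm(x1)
y2 = x2 / np.linalg.norm(x2)
P1 = complete_to_orthonormal_basis(y1)  # first column is y1
P2 = complete_to_orthonormal_basis(y2)  # first column is y2
P = P2 @ P1.T                           # P1 orthogonal => P1^{-1} = P1^T

print(np.allclose(P @ x1, x2))          # P maps x1 to x2
print(np.allclose(P.T @ P, np.eye(3)))  # P is orthogonal
```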

Note: this same idea gives a stronger result. Namely, given any pair of orthogonal bases whose corresponding vectors have the same lengths, there is an orthogonal matrix that simultaneously maps each vector of the first basis to the corresponding vector of the second basis.
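The stronger claim can be checked the same way. A sketch with hypothetical example bases: if the columns of $X$ and $Y$ are orthogonal bases with matching column lengths, then $\Gamma = Y X^{-1}$ is orthogonal, because $Y^TY = X^TX$ (both are the same diagonal matrix of squared lengths):

```python
import numpy as np

# Columns of X and of Y are orthogonal bases of R^3; corresponding
# columns have equal norms (2, 1, and 3 respectively).
X = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, 1.0, 0.0]])
Y = np.array([[0.0, 1.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])

Gamma = Y @ np.linalg.inv(X)  # sends the i-th column of X to the i-th of Y

print(np.allclose(Gamma @ X, Y))
print(np.allclose(Gamma.T @ Gamma, np.eye(3)))  # Gamma is orthogonal
```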


$^\dagger$Generically, this clearly works by just forming the basis $\{ y_1, e_2, \dots, e_p \}$ then applying G–S. I leave it to you to check the edge cases where $y_1 \in \operatorname{span} \{ e_1, \dots, e_i \}$ for some $1 \leq i < p$.

Another answer:

Consider the plane spanned by $x_1$ and $x_2$ (when they are linearly independent; otherwise $x_2 = \pm x_1$ and a reflection or the identity suffices). You can rotate $x_1$ within this plane until it coincides with $x_2$, and a rotation is an orthogonal transformation.
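This rotation can be written down explicitly. A numerical sketch with assumed example vectors: build an orthonormal pair $\{u, w\}$ spanning the plane, rotate by the angle between the vectors inside that plane, and act as the identity on the orthogonal complement:

```python
import numpy as np

# Hypothetical example vectors with equal norms and linearly independent.
x1 = np.array([1.0, 0.0, 2.0, 2.0])
x2 = np.array([0.0, 3.0, 0.0, 0.0])

u = x1 / np.linalg.norm(x1)            # first axis of the rotation plane
y2 = x2 / np.linalg.norm(x2)
c = y2 @ u                             # cos(theta)
w = y2 - c * u                         # part of y2 orthogonal to u
s = np.linalg.norm(w)                  # sin(theta); nonzero by independence
w = w / s

# Rotation by theta in span{u, w}, identity on the orthogonal complement.
I = np.eye(len(x1))
Gamma = (I + (c - 1.0) * (np.outer(u, u) + np.outer(w, w))
           + s * (np.outer(w, u) - np.outer(u, w)))

print(np.allclose(Gamma @ x1, x2))     # the rotation aligns x1 with x2
print(np.allclose(Gamma.T @ Gamma, I)) # and is orthogonal
```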