Finding a specific $\mathbb{R}^3$ basis from $3$ vectors following a "simple" equation


After 3 weeks of trying to solve this equation, I come here for help. I have never seen this equation before, and even though it feels simple and straightforward, it doesn't seem to have any intuitive solution.

Here it goes. Let there be $3$ unitary vectors in $\mathbb{R}^3$, namely $\vec{V_1}$, $\vec{V_2}$ and $\vec{V_3}$, with no other assumptions about them.

Find all unitary vectors $\vec{X}$ of $\mathbb{R}^3$ satisfying the following equation: $$ \sum\limits_{i=1}^3 (\vec{V_i} \times \vec{X}) (\vec{V_i}\cdot\vec{X}) = \vec{0}$$

I talk about a basis in the title because I have strong hints that the solution vectors form an orthonormal basis of $\mathbb{R}^3$. Also, one can show that if $\vec{X_1}$ and $\vec{X_2}$ satisfy the equation, then so does $\vec{X_1}\times \vec{X_2}$.

If anyone here has seen this equation somewhere, or has a clue about the solution, I'd be forever thankful! Thanks!

Edit: I realized that this formulation is equivalent to finding the eigenvectors of the following matrix:

$$ M = \begin{pmatrix} 1 & v_{12} & v_{13} \\ v_{12} & 1 & v_{23} \\ v_{13} & v_{23} & 1 \\ \end{pmatrix} $$ with $v_{ij} = \vec{V_i}\cdot\vec{V_j}$. So the eigenvectors do indeed form an orthogonal basis, yet I couldn't find a simple analytic expression for them. The third-degree characteristic polynomial one has to solve doesn't seem to have any obvious root. Can anyone confirm that, or provide a reasonable analytical form for those vectors?
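As a numerical sanity check of this eigenvector claim (a sketch, not part of the question): writing $S = \sum_i \vec{V_i}\vec{V_i}^T$, the left-hand side of the equation equals $(S\vec{X})\times\vec{X}$, which vanishes exactly when $\vec{X}$ is an eigenvector of the symmetric matrix $S$. Note $S$ shares its characteristic polynomial with $M$ above; an eigenvector of $M$ gives the coordinates of a solution $\vec{X}$ in the $\vec{V_i}$ basis, while the eigenvectors of $S$ are the solutions directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three random unit vectors in R^3, stored as the rows of V.
V = rng.normal(size=(3, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)

def lhs(x):
    """Left-hand side of the equation: sum_i (V_i x X)(V_i . X)."""
    return sum(np.cross(v, x) * np.dot(v, x) for v in V)

# sum_i (V_i . X) V_i = S X with S = sum_i V_i V_i^T, so the equation
# reads (S X) x X = 0, i.e. X must be an eigenvector of the symmetric S.
S = V.T @ V
eigvals, eigvecs = np.linalg.eigh(S)

# Each unit-norm eigenvector should make the sum vanish (up to round-off).
residuals = [np.linalg.norm(lhs(eigvecs[:, k])) for k in range(3)]
```

Since `eigh` returns orthonormal eigenvectors of a symmetric matrix, this also corroborates the orthonormal-basis observation above.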

Thanks in advance !



Answer 1

If the columns of $$\mathbf{V} = \left [ \begin{matrix} \vec{V}_1 & \vec{V}_2 & \vec{V}_3 \end{matrix} \right ]$$ are mutually orthogonal, then they form a basis of $\mathbb{R}^3$. We can therefore write $$\vec{X} = x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3$$ and the equation $$\left(\vec{V}_1\times\vec{X}\right)\left(\vec{V}_1\cdot\vec{X}\right) + \left(\vec{V}_2\times\vec{X}\right)\left(\vec{V}_2\cdot\vec{X}\right) + \left(\vec{V}_3\times\vec{X}\right)\left(\vec{V}_3\cdot\vec{X}\right) = \vec{0}$$ becomes $$\begin{align} \vec{V}_1 \times \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right)\left( \vec{V}_1 \cdot \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right)\right) &~+ \\ \vec{V}_2 \times \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right)\left( \vec{V}_2 \cdot \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right)\right) &~+ \\ \vec{V}_3 \times \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right)\left( \vec{V}_3 \cdot \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right)\right) & = \vec{0} \\ \end{align}$$ which, using the orthogonality $\vec{V}_i \cdot \vec{V}_j = 0$ for $i \ne j$, simplifies to $$\begin{align} \vec{V}_1 \times \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right) x_1 \left\lVert \vec{V}_1 \right\rVert^2 &~+ \\ \vec{V}_2 \times \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right) x_2 \left\lVert \vec{V}_2 \right\rVert^2 &~+ \\ \vec{V}_3 \times \left( x_1 \vec{V}_1 + x_2 \vec{V}_2 + x_3 \vec{V}_3 \right) x_3 \left\lVert \vec{V}_3 \right\rVert^2 & = \vec{0} \\ \end{align}$$ and to $$\begin{align} \left( \vec{V}_1 \times \vec{V}_2 \right ) x_1 x_2 \left( \left\lVert\vec{V}_1\right\rVert^2 - \left\lVert\vec{V}_2\right\rVert^2 \right ) & ~+ \\ \left( \vec{V}_1 \times \vec{V}_3 \right ) x_1 x_3 \left( \left\lVert\vec{V}_1\right\rVert^2 - \left\lVert\vec{V}_3\right\rVert^2 \right ) & ~+ \\ \left( \vec{V}_2 \times \vec{V}_3 \right ) x_2 x_3 \left( \left\lVert\vec{V}_2\right\rVert^2 - \left\lVert\vec{V}_3\right\rVert^2 \right ) & = \vec{0} \\ \end{align}$$ and then (assuming, without loss of generality, a right-handed ordering, so that $\vec{V}_1 \times \vec{V}_2 \parallel \vec{V}_3$ and so on) to
$$\begin{align} \vec{V}_3 \frac{\left\lVert\vec{V}_1\right\rVert \left\lVert\vec{V}_2\right\rVert}{\left\lVert\vec{V}_3\right\rVert} x_1 x_2 \left( \left\lVert\vec{V}_1\right\rVert^2 - \left\lVert\vec{V}_2\right\rVert^2 \right ) & ~+ \\ \vec{V}_2 \frac{\left\lVert\vec{V}_1\right\rVert \left\lVert\vec{V}_3\right\rVert}{\left\lVert\vec{V}_2\right\rVert} x_1 x_3 \left( \left\lVert\vec{V}_3\right\rVert^2 - \left\lVert\vec{V}_1\right\rVert^2 \right ) & ~+ \\ \vec{V}_1 \frac{\left\lVert\vec{V}_2\right\rVert \left\lVert\vec{V}_3\right\rVert}{\left\lVert\vec{V}_1\right\rVert} x_2 x_3 \left( \left\lVert\vec{V}_2\right\rVert^2 - \left\lVert\vec{V}_3\right\rVert^2 \right ) & = \vec{0} \\ \end{align}$$ Because the basis vectors are linearly independent, the sum can only vanish if the coefficient of each basis vector vanishes separately (and note that $\vec{X} = \vec{0}$ if and only if $x_1 = x_2 = x_3 = 0$). If $\left\lVert\vec{V}_i\right\rVert \gt 0$, the vanishing of the above sum is equivalent to $$\begin{cases} x_1 x_2 \left( \left\lVert\vec{V}_1\right\rVert^2 - \left\lVert\vec{V}_2\right\rVert^2 \right ) = 0 \\ x_1 x_3 \left( \left\lVert\vec{V}_3\right\rVert^2 - \left\lVert\vec{V}_1\right\rVert^2 \right ) = 0 \\ x_2 x_3 \left( \left\lVert\vec{V}_2\right\rVert^2 - \left\lVert\vec{V}_3\right\rVert^2 \right ) = 0 \\ \end{cases}$$ When two of the norms coincide, the corresponding equation holds trivially; in particular, if all three norms are equal (an orthonormal triple), every $\vec{X}$ is a solution. If instead the norms are pairwise distinct, then besides $x_1 = x_2 = x_3 = 0$ the system has three types of solutions: $$\begin{array}{llll} \text{1.} & x_1 \ne 0, & x_2 = 0, & x_3 = 0 \\ \text{2.} & x_1 = 0, & x_2 \ne 0, & x_3 = 0 \\ \text{3.} & x_1 = 0, & x_2 = 0, & x_3 \ne 0 \\ \end{array}$$ In other words, all vectors $\vec{X}$ parallel to one of the vectors $\vec{V}_i$ fulfill the equation.
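A quick numerical check of this conclusion (a sketch, using a random orthonormal triple generated via QR): any $\vec{X}$ parallel to one of the $\vec{V}_i$ makes the sum vanish, since the $i$-th term has a zero cross product and the other two terms have zero dot products.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random orthonormal triple: the rows of an orthogonal matrix from QR.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
V = Q.T  # rows V[0], V[1], V[2] are mutually orthogonal unit vectors

def lhs(x):
    """Left-hand side of the equation: sum_i (V_i x X)(V_i . X)."""
    return sum(np.cross(v, x) * np.dot(v, x) for v in V)

# For X parallel to any V_i (here scaled by 2.5 to show the scale does
# not matter), every term vanishes, so the residual is essentially zero.
residuals = [np.linalg.norm(lhs(2.5 * v)) for v in V]
```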

A more general approach is to expand the original equation in terms of $\mathbf{V}$ and $\vec{x} = \left[ \begin{matrix} x_1 \\ x_2 \\ x_3 \end{matrix} \right ]$.

Answer 2

Let $\vec{V}_1$, $\vec{V}_2$, and $\vec{V}_3$ be arbitrary non-parallel vectors, i.e. $$\left\lbrace \begin{array}{c} 0 \lt \left\lvert \vec{V}_1 \cdot \vec{V}_2 \right\rvert \lt \left\lVert \vec{V}_1\right\rVert \left\lVert \vec{V}_2\right\rVert \\ 0 \lt \left\lvert \vec{V}_1 \cdot \vec{V}_3 \right\rvert \lt \left\lVert \vec{V}_1\right\rVert \left\lVert \vec{V}_3\right\rVert \\ 0 \lt \left\lvert \vec{V}_2 \cdot \vec{V}_3 \right\rvert \lt \left\lVert \vec{V}_2\right\rVert \left\lVert \vec{V}_3\right\rVert \\ \end{array} \right. \tag{1}\label{EQ1}$$ With these, you can use the Gram-Schmidt process to form an orthonormal basis $\hat{e}_1$, $\hat{e}_2$, and $\hat{e}_3$ such that $$\left\lbrace\begin{aligned} \vec{V}_1 &= \lambda_{1} \hat{e}_1 \\ \vec{V}_2 &= \lambda_{21} \hat{e}_1 + \lambda_{22} \hat{e}_2 \\ \vec{V}_3 &= \lambda_{31} \hat{e}_1 + \lambda_{32} \hat{e}_2 + \lambda_{33} \hat{e}_3 \\ \end{aligned}\right.\tag{2}\label{EQ2}$$ If by "unitary", you mean the vectors are of unit length, then also $$\left\lbrace\begin{aligned} \lambda_{1} &= 1 \\ \lambda_{21}^2 + \lambda_{22}^2 &= 1 \\ \lambda_{31}^2 + \lambda_{32}^2 + \lambda_{33}^2 &= 1 \\ \end{aligned}\right.$$ If we use $$\vec{X} = \left [ \begin{matrix} x_1 \\ x_2 \\ x_3 \end{matrix} \right ] \tag{3}\label{EQ3}$$ and $\vec{X}$ is also of unit length, then $x_1^2 + x_2^2 + x_3^2 = 1$.
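The coefficients $\lambda_{ij}$ of equation (2) can be computed in practice without running Gram-Schmidt by hand: Gram-Schmidt on the rows of a matrix $\mathbf{V}$ amounts to an LQ factorization $\mathbf{V} = L E$, which NumPy provides via the QR factorization of the transpose. A sketch (signs of individual $\lambda_{ij}$ may differ from a hand-run Gram-Schmidt, since QR fixes them only up to sign):

```python
import numpy as np

rng = np.random.default_rng(2)

# Three random unit vectors as the rows of V.
V = rng.normal(size=(3, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Gram-Schmidt on the rows of V gives V = L @ E with L lower triangular
# (entries lambda_{ij}) and the rows of E orthonormal (the basis
# e_1, e_2, e_3).  This LQ factorization is the QR factorization of V^T.
Q, R = np.linalg.qr(V.T)
L, E = R.T, Q.T  # V == L @ E

# Since each V_i has unit length, each row of L has unit norm
# (lambda_1^2 = 1, lambda_21^2 + lambda_22^2 = 1, and so on).
row_norms = np.linalg.norm(L, axis=1)
```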

The problem is to find all $\vec{X}$ that fulfill $$\sum_{i=1}^3 \left(\vec{V}_i \times \vec{X}\right)\left(\vec{V}_i \cdot \vec{X}\right) = \vec{0} \tag{4}\label{EQ4}$$ Applying $\eqref{EQ2}$ and $\eqref{EQ3}$ to $\eqref{EQ4}$, noting that the vector is only zero if each of its components is zero, we get the set of three equations $$\left\lbrace\begin{aligned} - (\lambda_{31} \lambda_{33}) x_1 x_2 & + ~ \\ (\lambda_{21} \lambda_{22} + \lambda_{31} \lambda_{32} ) x_1 x_3 & + ~\\ - (\lambda_{32} \lambda_{33}) x_2^2 & + ~ \\ (\lambda_{22}^2 + \lambda_{32}^2 - \lambda_{33}^2) x_2 x_3 & + ~ \\ (\lambda_{32} \lambda_{33}) x_3^2 & = 0 \\ (\lambda_{31} \lambda_{33}) x_1^2 & + ~ \\ (\lambda_{32} \lambda_{33}) x_1 x_2 & + ~ \\ (\lambda_{33}^2 - \lambda_{1}^2 - \lambda_{21}^2 - \lambda_{31}^2) x_1 x_3 & + ~ \\ - (\lambda_{21} \lambda_{22} + \lambda_{31} \lambda_{32}) x_2 x_3 & + ~ \\ - \lambda_{31} \lambda_{33} x_3^2 & = 0 \\ - (\lambda_{21} \lambda_{22} + \lambda_{31} \lambda_{32}) x_1^2 & + ~ \\ (\lambda_{1}^2 - \lambda_{22}^2 + \lambda_{21}^2 + \lambda_{31}^2 - \lambda_{32}^2) x_1 x_2 & + ~ \\ - (\lambda_{32} \lambda_{33}) x_1 x_3 & + ~ \\ (\lambda_{21} \lambda_{22} + \lambda_{31} \lambda_{32}) x_2^2 & + ~ \\ (\lambda_{31} \lambda_{33}) x_2 x_3 & = 0 \\ \end{aligned}\right.\tag{5}\label{EQ5}$$ As a set of equations $\eqref{EQ5}$ does not have any interesting solutions (just complicated expressions for $x_2$ and $x_3$ with a free $x_1$, for example), but if indeed "unitary" is used here to mean "of unit length", then many of the terms simplify a lot.

Also note that for example the first of the three equations can also be written as $$\left[\begin{matrix} x_1 & x_2 & x_3 \end{matrix}\right] \cdot \left[\begin{matrix} 0 & -\lambda_{31}\lambda_{33} & \lambda_{21}\lambda_{22} + \lambda_{31}\lambda_{32} \\ 0 & -\lambda_{32}\lambda_{33} & \lambda_{22}^2 + \lambda_{32}^2 - \lambda_{33}^2 \\ 0 & 0 & \lambda_{32}\lambda_{33} \\ \end{matrix}\right] \cdot \left[\begin{matrix} x_1 \\ x_2 \\ x_3 \end{matrix} \right] = 0$$ as can the other two; you essentially get three matrices that all yield zero when pre- and post-multiplied by any vector fulfilling the specified condition $\eqref{EQ4}$. This may or may not help, depending on what OP means by "unitary".
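These quadratic-form matrices can also be generated mechanically instead of by hand. Writing the left-hand side of (4) as $(S\vec{X})\times\vec{X}$ with $S = \sum_i \vec{V}_i\vec{V}_i^T$, component $k$ is $\vec{X}^T B_k \vec{X}$ with $(B_k)_{mj} = \sum_i \epsilon_{kij} S_{im}$, where $\epsilon$ is the Levi-Civita symbol. A sketch (in ambient coordinates rather than the $\hat{e}$-basis used above, so these $B_k$ differ from the matrices of (5) by a change of basis):

```python
import numpy as np

rng = np.random.default_rng(3)

# Three random unit vectors as the rows of V, and S = sum_i V_i V_i^T.
V = rng.normal(size=(3, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)
S = V.T @ V

# Levi-Civita symbol eps[k, i, j].
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# (B_k)_{mj} = sum_i eps[k, i, j] * S[i, m], so that
# x^T B_k x equals component k of (S x) cross x.
B = np.einsum('kij,im->kmj', eps, S)

# Cross-check the three quadratic forms against a direct evaluation.
x = rng.normal(size=3)
direct = np.cross(S @ x, x)
via_forms = np.array([x @ B[k] @ x for k in range(3)])
```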