As part of a project I am doing at work, I have to code an eigensolver for symmetric 3x3 matrices. For a number of reasons, I cannot use a library for this task and have to implement the code myself. I have found this excellent paper that contains a suitable algorithm (the non-iterative one) that I can use, and I found it fairly tractable (my background is in Computer Science, so I am not an expert in linear algebra).
The part where I am getting stuck is where they present an algorithm to compute the second eigenvector $\mathbf{E}$ (corresponding to the second largest eigenvalue) on page 15. They start by creating a right-handed orthonormal basis $\{\mathbf{U},\mathbf{V},\mathbf{W}\}$ containing the already computed eigenvector $\mathbf{W}$ and state that $\mathbf{E}$ has to be a circular combination of $\mathbf{U}$ and $\mathbf{V}$. So far, so good. The next paragraph (bottom of page 15) is where they lose me. I cannot seem to understand the motivation for what they do next. It also seems to me that they are trying to multiply a 3x2 matrix by a 3x1 vector, which isn't possible.
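For reference, the basis-building step itself is straightforward. A sketch of one common way to do it (my own hypothetical helper, not the paper's code verbatim: pick $\mathbf U$ orthogonal to $\mathbf W$ by zeroing the component $\mathbf W$ is most aligned with, then complete with a cross product):

```python
import math

def orthogonal_complement(w):
    """Hypothetical helper: given a unit-length w, return u, v such that
    {u, v, w} is a right-handed orthonormal basis, i.e. u x v = w."""
    # Branch on the larger of |w0|, |w1| so the normalization below
    # never divides by zero when w has unit length.
    if abs(w[0]) > abs(w[1]):
        inv_len = 1.0 / math.sqrt(w[0] * w[0] + w[2] * w[2])
        u = (-w[2] * inv_len, 0.0, w[0] * inv_len)
    else:
        inv_len = 1.0 / math.sqrt(w[1] * w[1] + w[2] * w[2])
        u = (0.0, w[2] * inv_len, -w[1] * inv_len)
    # v = w x u completes the right-handed basis.
    v = (w[1] * u[2] - w[2] * u[1],
         w[2] * u[0] - w[0] * u[2],
         w[0] * u[1] - w[1] * u[0])
    return u, v
```

Since $\mathbf U\perp\mathbf W$ and both are unit length, $\mathbf V=\mathbf W\times\mathbf U$ is automatically unit length and makes the triple right-handed.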
Any help in clarifying this would be greatly appreciated. I could just implement the algorithm blindly, but I really don't like the idea of doing so!
You're right; they're using $J$ inconsistently. If you define $J$ to be the $2\times3$ matrix whose rows are $\mathbf U^\top$ and $\mathbf V^\top$, and $M=J(A-\alpha_1I)J^\top$, it works out. Since $J^\top J$ is the orthogonal projection onto $\operatorname{span}\{\mathbf U,\mathbf V\}$ and $\mathbf E$ lies in that span, we have $J^\top J\mathbf E=\mathbf E$, so $(A-\alpha_1I)\mathbf E=\mathbf0$ can be written $(A-\alpha_1I)J^\top J\mathbf E=\mathbf0$. Multiplying from the left by $J$ then yields $M\mathbf X=\mathbf0$, where $\mathbf X=J\mathbf E$ holds the coordinates of $\mathbf E$ in the $\{\mathbf U,\mathbf V\}$ basis.
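To make this concrete, the whole construction can be sketched in pure Python (a hypothetical illustration under my own conventions, not the paper's exact code: `a` is the $3\times3$ matrix as a nested list, `alpha1` the second-largest eigenvalue, and `u`, `v` unit vectors spanning the plane orthogonal to $\mathbf W$; the null-vector selection at the end is one common choice, not necessarily the paper's):

```python
import math

def second_eigenvector(a, alpha1, u, v):
    """With J having rows u and v, form the 2x2 symmetric
    M = J (A - alpha1*I) J^T, take a unit null vector X = (x0, x1)
    of M, and return E = x0*u + x1*v."""
    def mat_vec(m, x):
        return tuple(sum(m[i][j] * x[j] for j in range(3)) for i in range(3))
    def dot(x, y):
        return sum(x[i] * y[i] for i in range(3))
    # B = A - alpha1*I; M's entries are m00 = u.Bu, m01 = u.Bv, m11 = v.Bv.
    b = [[a[i][j] - (alpha1 if i == j else 0.0) for j in range(3)]
         for i in range(3)]
    bu, bv = mat_vec(b, u), mat_vec(b, v)
    m00, m01, m11 = dot(u, bu), dot(u, bv), dot(v, bv)
    # M is singular (alpha1 is an eigenvalue), so (m01, -m00) and
    # (m11, -m01) are both null vectors of M; build X from the row of
    # larger magnitude for numerical stability.
    if abs(m00) >= abs(m11):
        length = math.hypot(m00, m01)
        x0, x1 = (m01 / length, -m00 / length) if length > 0.0 else (1.0, 0.0)
    else:
        length = math.hypot(m11, m01)
        x0, x1 = m11 / length, -m01 / length
    return tuple(x0 * u[i] + x1 * v[i] for i in range(3))
```

Note that $M$ is only $2\times2$, which is the point of the whole maneuver: the $3\times3$ null-space problem for $\mathbf E$ is reduced to a trivial two-dimensional one in the $\{\mathbf U,\mathbf V\}$ coordinates.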