I'm reading Trefethen & Bau's Numerical Linear Algebra (SIAM, 1997). In Lecture 30, they discuss the divide-and-conquer eigenvalue algorithm for computing "eigenvectors as well as eigenvalues" of a symmetric tridiagonal matrix $T$, as they state explicitly on page 229. Wikipedia has an equivalent summary.
The idea is this: split $T$ into two smaller symmetric tridiagonal matrices $T_1$ and $T_2$ of about half the size. The off-diagonal entry $\beta$ that belongs to neither block is pulled out of $T$ as a rank-one term $B$, supported on the $2\times 2$ block around the meeting point of $T_1$ and $T_2$. Recursively diagonalise the resulting modified blocks $\hat T_1 = Q_1D_1Q_1^T$ and $\hat T_2 = Q_2D_2Q_2^T$, and when you're done, add $B$ back and correct your diagonalisation. As far as I'm aware, this should look like: $$\begin{align} T &= \begin{bmatrix}Q_1 & 0\\0 & Q_2\end{bmatrix}\begin{bmatrix}D_1 & 0\\0 & D_2\end{bmatrix}\begin{bmatrix}Q_1^T & 0\\0 & Q_2^T\end{bmatrix} + B \\ &= \begin{bmatrix}Q_1 & 0\\0 & Q_2\end{bmatrix}\left(\begin{bmatrix}D_1 & 0\\0 & D_2\end{bmatrix} + \beta \vec z\vec z^T\right)\begin{bmatrix}Q_1^T & 0\\0 & Q_2^T\end{bmatrix}\\ &= \begin{bmatrix}Q_1 & 0\\0 & Q_2\end{bmatrix}Q_*D_*Q_*^T\begin{bmatrix}Q_1^T & 0\\0 & Q_2^T\end{bmatrix}\end{align}$$ with $\vec z$ the relevant expression (concretely, the last row of $Q_1$ and the first row of $Q_2$, stacked). Here's the problem: both Trefethen & Bau and Wikipedia only discuss how to get $D_*$; you do this by using Newton's method to find the roots of the secular equation $$f(\lambda) = 1 + \sum^m_{i=1} \frac{w_i^2}{d_i - \lambda}$$ where $\vec w = \sqrt{|\beta|}\vec z$ and $D = \begin{bmatrix}D_1 & 0\\0 & D_2\end{bmatrix}$, making the middle factor we want to diagonalise $D + \mathrm{sign}(\beta)\vec w \vec w^T$.
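To make sure I understand the split itself, here's a minimal numpy sketch (the matrix size and entries are made up by me for illustration):

```python
import numpy as np

# Toy 6x6 symmetric tridiagonal T (entries are arbitrary illustration values).
n = 6
main = np.array([4.0, 3.0, 5.0, 2.0, 6.0, 3.0])
off = np.array([1.0, 2.0, 0.5, 1.5, 1.0])
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

m = n // 2
beta = off[m - 1]  # the coupling entry that belongs to neither half

# v has ones at the two rows that meet at the split point,
# so beta * v v^T is the rank-one term B.
v = np.zeros(n)
v[m - 1] = v[m] = 1.0

# Subtracting B leaves two decoupled symmetric tridiagonal blocks
# T_hat1 (top-left) and T_hat2 (bottom-right).
T_hat = T - beta * np.outer(v, v)
assert np.allclose(T_hat[:m, m:], 0)  # off-diagonal coupling is gone
assert np.allclose(T_hat + beta * np.outer(v, v), T)  # split is exact
```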
In fact, both of the above sources mention explicitly, in the course of proving this expression, that the eigenvectors of $D$ and of $D + \mathrm{sign}(\beta)\vec w \vec w^T$ cannot be the same. So ... where did the eigenvectors go? Since those eigenvectors differ, $Q_*$ certainly can't just be $I$. It may be true that we can find all the eigenvalues recursively this way, but that's not all we set out to do!
The eigenvector computation for the divide-and-conquer eigensolver is actually a somewhat subtle matter. Section 5.3.3 of Applied Numerical Linear Algebra by Demmel has a great treatment. I'll focus on the case where $\beta > 0$; the other case is similar.
The eigenvector of $D+\vec{w}\vec{w}^\top$ associated with the eigenvalue $\lambda$ is (up to normalization) $(D-\lambda I)^{-1}\vec{w}$. Using this formula, one can compute all the eigenvectors of $D+\vec{w}\vec{w}^\top$ in $\mathcal{O}(n^2)$ operations—easy peasy!
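To illustrate the formula, here is a minimal numpy sketch ($D$ and $\vec w$ are random toy data, and I use a dense eigenvalue solver as a stand-in for the secular-equation roots):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
d = np.sort(rng.uniform(0, 10, n))   # diagonal of D, assumed distinct
w = rng.uniform(0.5, 1.5, n)
A = np.diag(d) + np.outer(w, w)      # the matrix D + w w^T

# Stand-in for the roots of the secular equation: any symmetric eigensolver.
lam = np.linalg.eigvalsh(A)

# Column j of Q is (D - lam_j I)^{-1} w, i.e. entry (i, j) is w_i / (d_i - lam_j).
Q = w[:, None] / (d[:, None] - lam[None, :])
Q /= np.linalg.norm(Q, axis=0)       # normalize each eigenvector

# Check the eigenpairs: A Q = Q diag(lam)
assert np.allclose(A @ Q, Q * lam, atol=1e-6)
```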
Unfortunately, when implemented directly, this formula has terrible accuracy issues. Instead, one should compute a new vector $\vec{z}$ such that the computed eigenvalues, numerical errors and all, are exactly the eigenvalues of $D+\vec{z}\vec{z}^\top$. The entries of $\vec{z}$ are given by Löwner's formula
$$ |\vec{z}_i| = \left[ \frac{\prod_{j=1}^n (\lambda_j - D_{ii})}{\prod_{\substack{j=1\\j\ne i}}^n (D_{jj} - D_{ii})} \right]^{1/2}. $$
We then apply the formula $(D-\lambda I)^{-1}\vec{z}$ to get accurate eigenvectors for $D+\vec{w}\vec{w}^\top$. After normalizing each column to unit length, this gives us $Q_*$, which we multiply into $\begin{bmatrix} Q_1 & 0 \\ 0 & Q_2 \end{bmatrix}$ to get the eigenvector matrix of $T$.
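Putting the last two steps together, here's a minimal numpy sketch (toy data again; a dense solver's eigenvalues stand in for the computed secular-equation roots, and the sign of each $z_i$ is a free choice, which I copy from $\vec{w}$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
d = np.sort(rng.uniform(0, 10, n))   # diagonal of D, assumed distinct
w = rng.uniform(0.5, 1.5, n)
lam = np.linalg.eigvalsh(np.diag(d) + np.outer(w, w))  # "computed" eigenvalues

# Loewner's formula: rebuild |z_i| so that the computed lam are exactly
# the eigenvalues of D + z z^T. Signs don't affect the eigenvalues.
z = np.empty(n)
for i in range(n):
    num = np.prod(lam - d[i])
    den = np.prod(np.delete(d, i) - d[i])
    z[i] = np.copysign(np.sqrt(np.abs(num / den)), w[i])

# Eigenvectors via (D - lam_j I)^{-1} z, then normalize columns -> Q_*.
Q = z[:, None] / (d[:, None] - lam[None, :])
Q /= np.linalg.norm(Q, axis=0)

# Q_* now orthogonally diagonalizes D + z z^T.
assert np.allclose(Q.T @ Q, np.eye(n), atol=1e-6)
assert np.allclose(Q @ np.diag(lam) @ Q.T, np.diag(d) + np.outer(z, z), atol=1e-6)
```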
The total cost of computing all the eigenvectors is $\mathcal{O}(n^3)$, dominated by the final matrix-matrix multiplication that forms the eigenvector matrix. The brave of heart can improve this to $\mathcal{O}(n^2)$ with the fast multipole method, but this requires much pain and suffering. (As far as I know, it took 20 years after the fast multipole acceleration of the divide-and-conquer eigensolver was first proposed for a brave soul to actually complete the task.)