As a part of a larger proof, my text claims that if
$$A\begin{bmatrix}u_1&u_2\\ \end{bmatrix}=\begin{bmatrix}u_1&u_2\\ \end{bmatrix}\begin{bmatrix}\lambda&1\\0&\lambda\\ \end{bmatrix}$$
where $A\in \mathbb R^{n\times n}, u_1, u_2$ are complex vectors, and $\lambda \in \mathbb C \setminus \mathbb R$, then
$$\text{span}\{u_1, u_2 \}\cap \text{span}\{ \overline{u_1}, \overline{u_2} \}=\{ 0 \}$$
I don't quite understand why. Since $A$ is real, conjugating both sides gives
$$A\begin{bmatrix}\overline{u_1}&\overline{u_2}\\ \end{bmatrix}=\begin{bmatrix}\overline{u_1}&\overline{u_2}\\ \end{bmatrix}\begin{bmatrix}\overline{\lambda}&1\\0&\overline{\lambda}\\ \end{bmatrix}$$
And since $\lambda \notin \mathbb R$, we know that $\lambda \neq \overline{\lambda}$.
Naively taking an element in the intersection of spans and writing it out as a combination doesn't seem to help.
There must be some simple argument I don't see.
Edit: I had misread the question, so here is a new answer. The proof below is rather clumsy; more elegant proofs can be found in textbooks that discuss real Jordan forms.
By assumption, we have
$$\begin{cases} (A-\lambda I)u_1 = 0,\\ (A-\lambda I)u_2 = u_1,\\ (A-\bar{\lambda}I)\bar{u}_1 = 0,\\ (A-\bar{\lambda}I)\bar{u}_2 = \bar{u}_1. \end{cases}$$
Suppose the intersection $\operatorname{span}\{u_1, u_2\}\cap\operatorname{span}\{\bar{u}_1, \bar{u}_2\}$ contains a vector
$$x = au_1+bu_2=c\bar{u}_1+d\bar{u}_2\tag{$\ast$}$$
where $a,b,c,d$ are some complex numbers. Applying $(A-\lambda I)(A-\bar{\lambda}I)$ to both sides of $(\ast)$ (note that the two factors of this matrix product commute), we get
\begin{align} (A-\bar{\lambda}I)bu_1 &= (A-\lambda I)d\bar{u}_1,\\ b(\lambda -\bar{\lambda})u_1 &= d(\bar{\lambda}-\lambda )\bar{u}_1,\\ bu_1 &= -d\bar{u}_1,\tag{$\dagger$} \end{align}
where the last line uses $\lambda\neq\bar{\lambda}$. Now there are two possibilities (recall that $u_1\neq 0$, since it is the eigenvector of the Jordan chain):

1. $b\neq 0$. Then $(\dagger)$ gives $u_1 = -(d/b)\bar{u}_1$, so $Au_1 = \lambda u_1$ and also $Au_1 = -(d/b)A\bar{u}_1 = \bar{\lambda}u_1$. Since $\lambda\neq\bar{\lambda}$, this forces $u_1 = 0$, a contradiction.
2. $b = 0$. Then $(\dagger)$ gives $d\bar{u}_1 = 0$, and since $\bar{u}_1\neq 0$, also $d = 0$. Hence $x = au_1 = c\bar{u}_1$. Applying $(A-\bar{\lambda}I)$ to both sides gives $a(\lambda-\bar{\lambda})u_1 = 0$, so $a = 0$ and $x = 0$.
In other words, $x$ must be zero, i.e. the two spans in question have a zero intersection.
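As a numerical sanity check, here is a small NumPy script. The $4\times 4$ real Jordan form matrix and the chain vectors `u1`, `u2` are my own construction (not from the question): they satisfy exactly the assumed relations, and the trivial intersection of the two spans is equivalent to $\{u_1, u_2, \bar{u}_1, \bar{u}_2\}$ being linearly independent, i.e. the matrix with these columns having full rank.

```python
import numpy as np

# Construct a real 4x4 matrix A whose complexification has a 2x2 Jordan
# block for a non-real eigenvalue lam = alpha + beta*i (standard real
# Jordan form: A = [[C, I], [0, C]] with C the rotation-scaling block).
alpha, beta = 2.0, 3.0
lam = alpha + 1j * beta          # lam = 2 + 3i, not real

C = np.array([[alpha, -beta],
              [beta,  alpha]])   # eigenvalues of C: lam and conj(lam)
A = np.block([[C, np.eye(2)],
              [np.zeros((2, 2)), C]])

w = np.array([1.0, -1.0j])       # C @ w == lam * w
u1 = np.concatenate([w, np.zeros(2)])   # A u1 = lam u1
u2 = np.concatenate([np.zeros(2), w])   # A u2 = lam u2 + u1

# Verify the assumed relations A [u1 u2] = [u1 u2] [[lam, 1], [0, lam]].
assert np.allclose(A @ u1, lam * u1)
assert np.allclose(A @ u2, lam * u2 + u1)

# The spans intersect trivially iff {u1, u2, conj(u1), conj(u2)} is
# linearly independent, i.e. the 4x4 matrix of columns has rank 4.
M = np.column_stack([u1, u2, u1.conj(), u2.conj()])
print(np.linalg.matrix_rank(M))  # 4 -> the intersection is {0}
```

Of course this only checks one concrete instance, but it is a useful way to convince oneself of the claim before working through the algebra.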