From Alan J. Laub, Matrix Analysis for Scientists and Engineers, 2004, p. 79:
Theorem 9.15. Let $A \in \mathbb{C}^{n\times n} $ have distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ and let the corresponding right eigenvectors form a matrix $X = [x_1, \ldots ,x_n]$. Similarly, let $Y = [y_1, \ldots ,y_n]$ be the matrix of corresponding left eigenvectors. Furthermore, suppose the left and right eigenvectors have been normalized so that $y_i^Hx_i=1, i \in \{1,\ldots,n\}$. Finally, let $\Lambda = \text{diag}(\lambda_1,\ldots,\lambda_n) \in \mathbb{R}^{n\times n}$ ...
The theorem goes on, but I am wondering about the line where he lets $\Lambda = \text{diag}(\lambda_1,\ldots,\lambda_n) \in \mathbb{R}^{n\times n}$. In general, matrices over the complex numbers with distinct eigenvalues need not have real eigenvalues. How then can we assume that $\Lambda \in \mathbb{R}^{n\times n}$?
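For instance, a quick NumPy check (the matrix here is just an illustrative example of my concern, not one from the book) shows a real $2\times 2$ matrix whose eigenvalues are distinct but not real, so its $\Lambda$ could not lie in $\mathbb{R}^{2\times 2}$:

```python
import numpy as np

# Illustrative matrix: distinct eigenvalues, but they are +i and -i,
# so Lambda = diag(i, -i) is not a real matrix.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals, _ = np.linalg.eig(A)
print(eigvals)  # approximately [0.+1.j, 0.-1.j]
```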
In a theorem, we are free to impose conditions. Here, $\Lambda \in \mathbb{R}^{n\times n}$ is part of the hypothesis: the statement simply restricts attention to matrices whose (distinct) eigenvalues happen to be real, not a claim that every complex matrix with distinct eigenvalues has real ones. We can use the theorem whenever its conditions hold; when an assumption fails, the theorem is silent, and its conclusion may or may not hold.
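A minimal NumPy sketch of using the decomposition when the condition does hold (the matrix below is a made-up example with distinct real eigenvalues; the left eigenvectors are obtained from $X^{-1}$ so that the normalization $y_i^H x_i = 1$ is automatic):

```python
import numpy as np

# Hypothetical example with distinct *real* eigenvalues (2 and 5),
# so the hypothesis Lambda in R^{n x n} holds.
A = np.array([[2.0, 1.0],
              [0.0, 5.0]])

# Right eigenvectors: columns x_i of X satisfy A x_i = lambda_i x_i.
lam, X = np.linalg.eig(A)

# From A = X Lambda X^{-1}, the rows of X^{-1} are the y_i^H,
# so taking Y = (X^{-1})^H gives left eigenvectors with Y^H X = I.
Y = np.linalg.inv(X).conj().T

Lambda = np.diag(lam)

print(np.allclose(Y.conj().T @ X, np.eye(2)))    # normalization y_i^H x_j = delta_ij
print(np.allclose(Y.conj().T @ A @ X, Lambda))   # Y^H A X = Lambda
print(lam)                                       # [2., 5.] -- real, as assumed
```

If instead the eigenvalues came out complex, the hypothesis would fail and the theorem as stated simply would not apply to that matrix.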