Let $A$ be a square matrix over $\mathbb C$, and let $A^T$ denote its transpose.
It is not hard to see that $A$ and $A^T$ have the same set of eigenvalues (they share the same characteristic polynomial), so given $Ax=\lambda x$ for some vector $x\in\mathbb C^n$ and eigenvalue $\lambda\in\mathbb C$, we know that there must always be some other $y\in\mathbb C^n$ such that also $$A^T y=\lambda y.$$
We also know that, if $Ax=\lambda x$ and $A^T y=\mu y$ with $\lambda\neq \mu$, then $\langle y^*,x\rangle=0$, where $y^*$ denotes the vector whose entries are the complex conjugates of those of $y$. This follows from $$\lambda\langle y^*,x\rangle=\langle y^*,Ax\rangle=\mu\langle y^*,x\rangle,$$ the second equality because $\langle y^*,Ax\rangle=y^TAx=(A^Ty)^Tx=\mu\, y^Tx$.
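The biorthogonality claim above is easy to check numerically. The following sketch (using `numpy`, with a randomly generated matrix) verifies that right eigenvectors of $A$ and eigenvectors of $A^T$ belonging to *distinct* eigenvalues satisfy $y^Tx=0$, i.e. $\langle y^*,x\rangle=0$ with the plain (non-conjugated) dot product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

lam, X = np.linalg.eig(A)    # columns of X: right eigenvectors, A X[:,i] = lam[i] X[:,i]
mu, Y = np.linalg.eig(A.T)   # columns of Y: eigenvectors of A^T

for i in range(4):
    for j in range(4):
        if abs(lam[i] - mu[j]) > 1e-8:       # only pairs with distinct eigenvalues
            # plain transpose dot product y^T x, no conjugation: this is <y*, x>
            assert abs(Y[:, j] @ X[:, i]) < 1e-8
print("all pairs with lambda != mu satisfy y^T x = 0")
```

Note that for a generic real matrix the eigenvalues may be complex; the check works unchanged because `@` on complex 1-D arrays computes the unconjugated dot product.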
The same argument, however, provides no information in the case $\mu=\lambda$. Is there a relation that holds in general in that case?
More precisely, given $Ax=\lambda x$ and $A^T y=\lambda y$, is there any general relation between $x$ and $y$?
It is a classical result that any square matrix $A$ is similar to its transpose.
Writing $A^T=PAP^{-1}$, if $A^Tx=\lambda x$, then $$A(P^{-1}x)=\lambda (P^{-1}x),$$
so that $E_\lambda(A^T)=P\cdot E_\lambda(A)$, where $E_\lambda$ denotes the eigenspace for $\lambda$.
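As a sketch of how such a $P$ can look concretely: when $A$ happens to be diagonalizable, $A=VDV^{-1}$, one explicit choice (an assumption of this example, not the general construction) is $P=(VV^T)^{-1}$, since then $PAP^{-1}=V^{-T}DV^T=A^T$. Plain transpose is used throughout, even when $V$ is complex:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))          # generically diagonalizable

lam, V = np.linalg.eig(A)                # A = V diag(lam) V^{-1}
P = np.linalg.inv(V @ V.T)               # candidate similarity: A^T = P A P^{-1}

# check the similarity itself
assert np.allclose(P @ A @ np.linalg.inv(P), A.T)

# check E_lambda(A^T) = P . E_lambda(A): if A x = lam x, then A^T (P x) = lam (P x)
x = V[:, 0]
assert np.allclose(A.T @ (P @ x), lam[0] * (P @ x))
print("A^T = P A P^{-1} verified")
```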
The matrix $P$ isn't always easy to find, and indeed the "classical result" is not that easy to prove. Over a field such as $\mathbb C$ it can be shown with a density argument; in a more general context, one can use the Frobenius normal form (rational canonical form), which is the same for $A$ and $A^T$.
But for an orthogonal matrix, for example, it is easy to see that the eigenvectors are the same: there $A^T=A^{-1}$, and $A$ and $A^{-1}$ have the same eigenvectors, with reciprocal eigenvalues.
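A quick numerical illustration of the orthogonal case, using a $2\times 2$ rotation matrix: every eigenvector $x$ of $Q$ with $Qx=\lambda x$ is also an eigenvector of $Q^T$, with eigenvalue $1/\lambda$:

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation, hence orthogonal

lam, X = np.linalg.eig(Q)                          # eigenvalues e^{+i theta}, e^{-i theta}
for i in range(2):
    x = X[:, i]
    # Q^T = Q^{-1}, so Q^T x = (1/lam) x: same eigenvector, reciprocal eigenvalue
    assert np.allclose(Q.T @ x, (1 / lam[i]) * x)
print("eigenvectors of Q are eigenvectors of Q^T")
```

Since $|\lambda|=1$ for an orthogonal matrix, $1/\lambda=\bar\lambda$, consistent with $Q$ and $Q^T$ having conjugate spectra.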