Does every linear transformation from a complex vector space to itself have an eigenvector? I am unable to give any more context for this question because this is exactly how it was given to me. I do not really understand linear algebra, which is why I needed help proving whether this statement is true or false.
3.3k views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 best solutions below.
Of course @msm has given the morally correct answer in the spirit of the question, but I just want to pedantically point out that as written, the answer to the question is "No". There is exactly one linear transformation from a finite-dimensional complex vector space to itself that has no eigenvalue.
Everyone is citing the Fundamental Theorem of Algebra, but that says that every non-constant polynomial has at least one complex root. Can the characteristic polynomial ever be a constant? Since the degree of the polynomial equals the dimension of the vector space, this can only happen in the unique zero-dimensional complex vector space $V = \{0\}$. This is a perfectly good complex vector space, and it has exactly one linear map to itself, namely $f(x) = 0$.
Now, since the basis for this vector space is the empty set, it might be unclear how to represent the map as a $0 \times 0$ matrix, but that's unimportant since we can go back to our original definition of an eigenvalue: $\lambda$ is an eigenvalue for $f$ if there exists a vector $v \in V$, $v \neq 0$, such that $f(v) = \lambda v$. Since there are no vectors $v \neq 0$, there can be no eigenvectors, and hence no eigenvalues for this linear map.
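Amusingly, NumPy is happy to play along with this edge case: a recent version will accept an empty $0 \times 0$ matrix and report that it has no eigenvalues and no eigenvectors at all. A quick sketch (assuming a NumPy version that supports empty arrays in `linalg`):

```python
import numpy as np

# The unique linear map on the zero-dimensional complex vector space,
# represented (somewhat formally) as an empty 0x0 matrix.
M = np.zeros((0, 0), dtype=complex)

eigenvalues, eigenvectors = np.linalg.eig(M)

print(eigenvalues.shape)   # (0,): there are no eigenvalues at all
print(eigenvectors.shape)  # (0, 0): and no eigenvectors either
```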
Whether worrying about edge cases like this is pointless pedantry or part of the fun of rigorous mathematics is a matter of taste left up to the reader.
Edit: I also see now that "finite-dimensional" was never specified in the question either. In that case, the Fundamental Theorem of Algebra doesn't apply and there are lots of linear maps with no eigenvalues, like $f(x_1, x_2, \dotsc) = (0, x_1, x_2, \dotsc)$.
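To see directly that this shift map has no eigenvector, suppose $f(x) = \lambda x$ for some $x \neq 0$ and compare coordinates:

```latex
(0, x_1, x_2, \dotsc) = \lambda (x_1, x_2, x_3, \dotsc)
\quad\implies\quad
\lambda x_1 = 0 \ \text{ and } \ x_n = \lambda x_{n+1} \ \text{ for all } n \geq 1.
```

If $\lambda = 0$, the second relation gives $x_n = 0$ for all $n$. If $\lambda \neq 0$, the first relation gives $x_1 = 0$, and then $x_{n+1} = x_n/\lambda$ inductively forces every $x_n = 0$. Either way $x = 0$, contradicting $x \neq 0$, so $f$ has no eigenvector.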
Yep (assuming the space is finite-dimensional of positive dimension). Consider the characteristic polynomial $ch_M(x)$ of the transformation $M$. By the fundamental theorem of algebra, this non-constant polynomial has a root $\lambda$.
Since $ch_M(\lambda) = \det(M - \lambda I) = 0$, the transformation $M - \lambda I$ cannot be invertible, and so must have a vector $v \neq 0$ in its kernel, i.e.
$$ (M - \lambda I)v =0$$ which is the same as
$$ Mv = \lambda v $$
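The argument above is easy to check numerically. A small sketch using a random complex matrix as a stand-in for $M$ (the matrix and seed here are arbitrary illustrations, not anything from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary 4x4 complex matrix standing in for the transformation M.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# np.linalg.eig returns the roots of the characteristic polynomial
# (eigenvalues) and a matching eigenvector for each, as guaranteed over C.
eigenvalues, eigenvectors = np.linalg.eig(M)

# Verify M v = lambda v for the first eigenpair.
lam = eigenvalues[0]
v = eigenvectors[:, 0]
print(np.allclose(M @ v, lam * v))  # True
```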
If all the roots are distinct, then this same argument can be used to show that $M$ has a basis of eigenvectors, and so is diagonalizable. On the other hand, if the characteristic polynomial has a multiple root, there needn't be more than one linearly independent eigenvector corresponding to that eigenvalue.
Consider $$ \begin{bmatrix} 1 &0 \\ 0 & 1 \end{bmatrix}$$
This has characteristic polynomial $(1 - \lambda)^2$. The only root is $\lambda = 1$, but its eigenspace is all of $\mathbb{C}^2$: in particular, the two standard basis vectors are linearly independent eigenvectors.
On the other hand, consider: $$ \begin{bmatrix}1 & 1 \\ 0 & 1 \end{bmatrix}$$
Again, the characteristic polynomial is $(1-\lambda)^2$, and the only eigenvalue is $\lambda = 1$. But this time there is only one eigenvector up to scaling, namely $e_1 = (1, 0)^T$.
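The difference between the two matrices is the geometric multiplicity of $\lambda = 1$, i.e. the nullity of $M - I$. A quick sketch comparing the two examples (using rank–nullity: eigenspace dimension $= n - \operatorname{rank}(M - \lambda I)$):

```python
import numpy as np

I2 = np.eye(2)               # the identity matrix from the first example
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # the Jordan block from the second example

# Both have characteristic polynomial (1 - lambda)^2: eigenvalue 1, twice.
print(np.linalg.eigvals(I2))  # eigenvalue 1 with multiplicity 2
print(np.linalg.eigvals(J))   # same eigenvalues...

# ...but different eigenspace dimensions: nullity of (M - 1*I).
print(2 - np.linalg.matrix_rank(I2 - np.eye(2)))  # 2: two independent eigenvectors
print(2 - np.linalg.matrix_rank(J - np.eye(2)))   # 1: only one, so not diagonalizable
```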