Can an eigenvector contain free variables?


I have the matrix $$B=\begin{pmatrix}-6&4&-17\\0&-3&-4\\0&2&3\\\end{pmatrix}$$ which I found to have eigenvalues $\lambda_1=-6, \lambda_2=-1, \lambda_3=1$.

When attempting to find the eigenvectors for $\lambda_2$ and $\lambda_3$, I found the solution to contain free variables, and I was unsure whether an eigenvector can contain free variables or not.

Thanks.
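(Not part of the original question, but for anyone checking the arithmetic: the stated eigenvalues can be verified numerically with numpy. The matrix entries below are taken from the question.)

```python
import numpy as np

B = np.array([[-6,  4, -17],
              [ 0, -3,  -4],
              [ 0,  2,   3]], dtype=float)

# np.linalg.eig returns one representative eigenvector per eigenvalue,
# each normalised to unit length; any non-zero scalar multiple of a
# column of `eigenvectors` is an equally valid eigenvector.
eigenvalues, eigenvectors = np.linalg.eig(B)
print(np.sort(eigenvalues))  # approximately [-6. -1.  1.]
```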


Best answer:

Notice that if $v$ is an eigenvector, then for any non-zero number $t$, $t\cdot v$ is also an eigenvector.

If this is the free variable that you refer to, then yes.

Edit:

In general, if $v_1, \dots, v_k$ are eigenvectors for the same eigenvalue $\lambda$, i.e. each $v_i \neq 0$ satisfies $Av_i = \lambda v_i$, then $$A\left( \sum_{i=1}^k \alpha_i v_i\right) = \lambda \left( \sum_{i=1}^k \alpha_i v_i\right).$$

That is, if $ \sum_{i=1}^k \alpha_i v_i \ne 0$, then it is also an eigenvector for the same eigenvalue $\lambda$.
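A minimal numerical illustration of this linear-combination fact (using numpy and a hypothetical diagonal matrix whose eigenvalue $1$ is repeated, not the matrix from the question):

```python
import numpy as np

# A hypothetical matrix whose eigenvalue 1 has a two-dimensional eigenspace:
A = np.diag([1.0, 1.0, 0.0])

v1 = np.array([1.0, 0.0, 0.0])  # eigenvector for eigenvalue 1
v2 = np.array([0.0, 1.0, 0.0])  # another eigenvector for eigenvalue 1

w = 2 * v1 + 3 * v2               # a non-zero linear combination
assert np.allclose(A @ w, 1 * w)  # w is again an eigenvector for lambda = 1
```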

Answer:

Yes. In fact, you always get (at least) one free variable when you are finding eigenvectors. This is because if $\lambda$ is an eigenvalue of $B$, then by definition $$|B - \lambda I| = 0,$$ so when you row-reduce the matrix $B - \lambda I$, you will always get at least one row of zeros, and hence at least one free variable when solving $(B - \lambda I)v = 0$. As Siong pointed out, that corresponds to the fact that if $v$ is an eigenvector corresponding to $\lambda$, then $t \cdot v$ is also an eigenvector corresponding to the same eigenvalue, for any $t \neq 0$. Indeed: $$ B (t \cdot v) = t \cdot Bv = t \cdot \lambda v = \lambda (t \cdot v).$$
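To see this rank drop concretely for the matrix in the question (a numerical sketch using numpy, not part of the original answer):

```python
import numpy as np

B = np.array([[-6,  4, -17],
              [ 0, -3,  -4],
              [ 0,  2,   3]], dtype=float)

lam = 1.0              # one of B's eigenvalues
M = B - lam * np.eye(3)

# det(M) = 0, so row reduction leaves at least one zero row:
# rank(M) < 3, and the system M v = 0 has a free variable.
print(np.linalg.matrix_rank(M))  # 2, i.e. exactly one free variable
```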

Answer:

There seems to be some confusion regarding the terminology.

Recall that, given a matrix $A$, an eigenvector is a non-zero vector $v$ such that $Av = \lambda v$ for some number $\lambda$, the corresponding eigenvalue.

Hence, an eigenvector is, fundamentally, nothing but an ordinary vector -- it might be $(1, 4, -2)$ or $(1, 0, 0)$ or $(0, 1, -1)$. So a given eigenvector doesn't "contain" anything but its coordinates, which are numbers.

However, given an eigenvalue, there are always infinitely many eigenvectors corresponding to that eigenvalue. Indeed, if $v$ is an eigenvector associated with the eigenvalue $\lambda$, then so is $t v$ for every non-zero value of $t$. This is easy to prove (and a proof is given in Wizact's answer).

In general, the set of all eigenvectors corresponding to a given eigenvalue, together with the zero vector, is called the eigenspace of that eigenvalue. This is a vector space itself. It might be one-dimensional, or it might be of any higher dimension.

For instance, consider the matrix

$$A=\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\\\end{pmatrix}.$$

This corresponds to a linear transformation $\mathbf{R}^3 \to \mathbf{R}^3$ which is particularly easy to visualise: it is orthogonal projection onto the $xy$ plane. Informally put, each vector in $\mathbf{R}^3$ is mapped to its shadow on the $xy$ plane when illuminated from directly above or below:

[Image: an example of this projection in action.]

Geometrically, it is clear that each vector already in the $xy$ plane, that is, each vector of the form $(x, y, 0)$, is mapped to itself. Hence, every such vector besides the zero vector is an eigenvector corresponding to the eigenvalue $1$. Also, no other vector is mapped to itself.

It is also clear that each vector on the $z$ axis, that is, each vector of the form $(0, 0, z)$, is mapped to the zero vector. Thus, each non-zero such vector is an eigenvector with eigenvalue $0$. Also, no other vector is mapped to the zero vector.

We therefore say that the $xy$ plane is the eigenspace corresponding to the eigenvalue $1$. This is clearly a two-dimensional vector space. A basis for this vector space is $\left\{(1, 0, 0), (0, 1, 0)\right\}$, so every vector in this eigenspace is a linear combination of these basis vectors. (More precisely, every vector in this space can be written as a linear combination of these basis vectors in a unique way.) Hence, every element of this space is of the form

$$s (1, 0, 0) + t (0, 1, 0)$$

for some coordinates $s$ and $t$. Every non-zero vector in this eigenspace is an eigenvector (for this eigenvalue).

Examples of eigenvectors: $(1, 0, 0)$, $(0, 1, 0)$, $(1, 1, 0)$, $(7, -4, 0)$.

Similarly, the $z$ axis is the eigenspace corresponding to the eigenvalue $0$. This is a one-dimensional vector space. A basis for this vector space is $\left\{ (0, 0, 1) \right\}$, so every vector in this eigenspace is a scalar multiple of this basis vector. Hence, every vector in this space is of the form

$$t (0, 0, 1)$$

for some coordinate $t$. Every non-zero vector in this eigenspace is an eigenvector (for this eigenvalue).

Examples of eigenvectors: $(0, 0, 1)$, $(0, 0, -7)$, $(0, 0, \pi)$, $(0, 0, \mathrm{arcsinh}(0.3)^\pi)$.
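Both eigenspaces above can be checked numerically (a sketch using numpy; $A$ is the projection matrix defined earlier in this answer):

```python
import numpy as np

A = np.diag([1.0, 1.0, 0.0])  # orthogonal projection onto the xy plane

# Any s*(1,0,0) + t*(0,1,0) with (s,t) != (0,0) is an eigenvector for lambda = 1:
s, t = 7.0, -4.0
v = s * np.array([1.0, 0.0, 0.0]) + t * np.array([0.0, 1.0, 0.0])
assert np.allclose(A @ v, v)

# ...and any non-zero multiple of (0,0,1) is an eigenvector for lambda = 0:
u = -7.0 * np.array([0.0, 0.0, 1.0])
assert np.allclose(A @ u, 0 * u)
```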

Hence, the expression for a general vector in a given eigenspace -- typically, a linear combination of the vectors in a basis for this eigenspace -- contains one or more free variables, but each individual eigenvector is a purely numeric object.