How to prove that a $3\times 3$ matrix has only $2$ eigenvectors?


I am working through a problem in Riley, Hobson and Bence (Mathematical Methods for Physics and Engineering) that revolves around the following matrix:

$$ A= \begin{pmatrix} 2 & 0 & 0 \\ -6 & 4 & 4 \\ 3 & -1 & 0 \\ \end{pmatrix} $$

I first have to show that the eigenvalues are degenerate (all three eigenvalues are 2) and that any eigenvector takes the form:

$$ \vec{x}= \begin{pmatrix} u\\ 3u-2v\\ v\\ \end{pmatrix} $$
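(Both claims are easy to sanity-check numerically. The sketch below, using NumPy with arbitrary sample values for $u$ and $v$, verifies that $(A-2I)^2=0$, so the only eigenvalue is $2$, and that vectors of the stated form satisfy $A\vec x = 2\vec x$.)

```python
import numpy as np

# The matrix from the problem.
A = np.array([[ 2,  0, 0],
              [-6,  4, 4],
              [ 3, -1, 0]], dtype=float)

# (A - 2I)^2 = 0, so the minimal polynomial is (t - 2)^2 and the only
# eigenvalue of A is 2, with algebraic multiplicity 3.
N = A - 2 * np.eye(3)
print(np.allclose(N @ N, 0))  # -> True

# Any vector of the form (u, 3u - 2v, v) satisfies A x = 2 x.
u, v = 1.7, -0.4  # arbitrary sample values
x = np.array([u, 3 * u - 2 * v, v])
print(np.allclose(A @ x, 2 * x))  # -> True
```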

Proving these two statements is easy. The interesting part of the question asks to prove the following statement:

If two pairs of values, $u_1, v_1$ and $u_2, v_2$, define two independent eigenvectors $\vec{x_1}$ and $\vec{x_2}$ , then any third similarly defined eigenvector $\vec{x_3}$ can be written as a linear combination of $\vec{x_1}$ and $\vec{x_2}$, i.e.

$$\vec{x_3}=a\vec{x_1}+b\vec{x_2}$$

Where:

$$a=\frac{u_3v_2-u_2v_3}{u_1v_2-u_2v_1} \ \ \ \ \ \ b=\frac{u_1v_3-u_3v_1}{u_1v_2-u_2v_1}$$

I've been struggling with this for a while but I don't know where to start. Any hints would be much appreciated.


Best answer:

The null space of the matrix $A - 2I$ is the eigenspace of $A$ for the eigenvalue $2$. The matrix equation $(A - 2I)\mathbf{x} = \mathbf{0}$ can have three linearly independent solutions only if $A - 2I = \mathbf{0}_{3\times 3}$, the zero $3 \times 3$ matrix; proving this is straightforward. Given your matrix $A$, it is clear that $A - 2I \neq \mathbf{0}_{3\times 3}$, so $A$ has at most $2$ linearly independent eigenvectors, a fact which is given at the beginning of the problem.
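(In fact $A - 2I$ has rank $1$, so by rank–nullity its null space is $3 - 1 = 2$ dimensional: there are exactly two independent eigenvectors. A quick NumPy check of this, offered only as an illustration:)

```python
import numpy as np

A = np.array([[ 2,  0, 0],
              [-6,  4, 4],
              [ 3, -1, 0]], dtype=float)
N = A - 2 * np.eye(3)

# N is not the zero matrix, so the eigenspace cannot be all of R^3.
print(np.any(N != 0))            # -> True

# N has rank 1, so its null space (the eigenspace for eigenvalue 2)
# has dimension 3 - 1 = 2.
print(np.linalg.matrix_rank(N))  # -> 1
```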

Second answer:

From the second statement, you know that the eigenspace is two-dimensional, so any pair of linearly independent eigenvectors forms a basis for it. That’s really the content of the boxed text.

Expand the equation $\vec x_3=a\vec x_1+b\vec x_2$ in coordinates: $$au_1+bu_2 = u_3 \\ av_1+bv_2 = v_3.$$ Now solve for $a$ and $b$ using Cramer’s rule: $$a = {\begin{vmatrix}u_3&u_2\\v_3&v_2\end{vmatrix} \over \begin{vmatrix} u_1&u_2\\v_1&v_2 \end{vmatrix}} = {u_3v_2-u_2v_3\over u_1v_2-u_2v_1} \\ b = {\begin{vmatrix}u_1&u_3\\v_1&v_3\end{vmatrix} \over \begin{vmatrix} u_1&u_2\\v_1&v_2 \end{vmatrix}} = {u_1v_3-u_3v_1\over u_1v_2-u_2v_1}.$$ We know that the denominators are nonzero (equivalently, that the two equations are independent) because $\vec x_1$ and $\vec x_2$ are linearly independent.
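(As a numerical illustration of the Cramer's-rule formulas, with three arbitrarily chosen $(u, v)$ pairs, the first two giving independent eigenvectors:)

```python
import numpy as np

# Sample parameter pairs; (u1, v1) and (u2, v2) give independent eigenvectors.
u1, v1 = 1.0, 0.0
u2, v2 = 0.0, 1.0
u3, v3 = 2.5, -1.5

def eigvec(u, v):
    """Eigenvector of the form (u, 3u - 2v, v)."""
    return np.array([u, 3 * u - 2 * v, v])

x1, x2, x3 = eigvec(u1, v1), eigvec(u2, v2), eigvec(u3, v3)

# Cramer's-rule coefficients from the answer above.
d = u1 * v2 - u2 * v1          # nonzero since x1, x2 are independent
a = (u3 * v2 - u2 * v3) / d
b = (u1 * v3 - u3 * v1) / d

print(np.allclose(a * x1 + b * x2, x3))  # -> True
```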