An example in my book that asks for a basis of an eigenspace


Let $$ A = \begin{bmatrix}4&-1&6\\2&1&6\\2&-1&8\end{bmatrix}$$ An eigenvalue of $A$ is $2$. Find a basis for the corresponding eigenspace.

Solution: Form

$$A-2I = \begin{bmatrix}4&-1&6\\2&1&6\\2&-1&8\end{bmatrix}-\begin{bmatrix}2&0&0\\0&2&0\\0&0&2\end{bmatrix}=\begin{bmatrix}2&-1&6\\2&-1&6\\2&-1&6\end{bmatrix}$$

and row-reduce the augmented matrix for $(A-2I)\mathbf{x}=\mathbf{0}$:

$$\begin{bmatrix}2&-1&6&0\\2&-1&6&0\\2&-1&6&0\end{bmatrix}\sim\begin{bmatrix}2&-1&6&0\\0&0&0&0\\0&0&0&0\end{bmatrix}$$
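(Not part of the book's solution, but a quick NumPy sketch confirms what the reduction shows: $A-2I$ has rank 1, which leaves two free variables.)

```python
import numpy as np

# the matrix A - 2I computed above: all three rows equal [2, -1, 6]
M = np.array([[2, -1, 6],
              [2, -1, 6],
              [2, -1, 6]], dtype=float)

# rank 1 leaves 3 - 1 = 2 free variables: a 2-dimensional eigenspace
rank = np.linalg.matrix_rank(M)
print(rank)
```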

At this point, it is clear that 2 is indeed an eigenvalue of $A$ because the equation $(A-2I)\mathbf{x}=\mathbf{0}$ has free variables. The general solution is

$$\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=x_2\begin{bmatrix}1/2\\1\\0\end{bmatrix}+x_3\begin{bmatrix}-3\\0\\1\end{bmatrix}$$ with $x_2$ and $x_3$ free.

The eigenspace, shown in fig. 3, is a two-dimensional subspace of $\mathbb{R}^3$. A basis is

$$\left\{\begin{bmatrix}1\\2\\0\end{bmatrix},\ \begin{bmatrix}-3\\0\\1\end{bmatrix}\right\}$$
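(A quick numerical check, not from the book: both of the book's basis vectors do lie in the null space of $A-2I$.)

```python
import numpy as np

# A - 2I from the solution: every row is [2, -1, 6]
M = np.array([[2, -1, 6],
              [2, -1, 6],
              [2, -1, 6]], dtype=float)

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([-3.0, 0.0, 1.0])

# both basis vectors satisfy (A - 2I)x = 0, so both are eigenvectors for 2
assert np.allclose(M @ v1, 0)
assert np.allclose(M @ v2, 0)
```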

Why did the author multiply $1$ and $6$ instead of multiplying $2$, $1$, and $6$ by $-1/2$? I couldn't understand why.



On BEST ANSWER

You seem to think that, given the row-reduced matrix $$ \begin{bmatrix} 2 & -1 & 6 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} $$

you can "read off" the first entry of each eigenvector directly from the matrix. It does not quite work that way. Rather than argue about why that shortcut fails, let me demonstrate the right method.

From the first row of the matrix

$$ \begin{bmatrix} 2 & -1 & 6 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} $$

you read off the equation

$$2x_1 - x_2 + 6 x_3 = 0.$$

[I am assuming that this matrix-to-equation conversion is clear. If not, please ask.]

There are two free variables, $x_2$ and $x_3$ (the remaining variable $x_1$ is determined by them). Solving for $x_1$: $$\begin{align} 2 x_1 & = x_2 - 6x_3 \\ x_1 & = \frac{1}{2} x_2 - 3 x_3. \end{align} $$

Then rewriting the column vector $\mathbf{x}$ in terms of $x_2$ and $x_3$, $$ \begin{align}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1/2 \cdot x_2 - 3 x_3 \\ x_2 \\ x_3 \end{bmatrix} & = \begin{bmatrix} 1/2 \cdot x_2 \\ x_2 \\ 0 \end{bmatrix} + \begin{bmatrix} -3 x_3 \\ 0 \\ x_3 \end{bmatrix}\\ & = x_2\begin{bmatrix} 1/2 \\ 1 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} -3 \\ 0 \\ 1 \end{bmatrix} \end{align}$$

Essentially, the above asserts: "$x_2$ and $x_3$ are a set of free variables for the eigenspace, and a basis for this eigenspace is $$ \left\lbrace \begin{bmatrix} 1/2 \\ 1 \\ 0 \end{bmatrix} , \, \, \begin{bmatrix} -3 \\ 0 \\ 1 \end{bmatrix} \right\rbrace. $$"

The point of this decomposition is to see the contributions of $x_2$ and $x_3$ separately. Indeed, if you set $x_2 = 1$, $x_3 = 0$, you get the first vector, and if you set $x_2 = 0$, $x_3 = 1$, you get the second vector. Since everything scales linearly, these two vectors span the whole eigenspace.
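The parameterization above can be sketched in a few lines of Python (my own illustration, with a hypothetical helper `solution`): each choice of the free variables $x_2, x_3$ produces a vector satisfying the original equation $2x_1 - x_2 + 6x_3 = 0$.

```python
import numpy as np

# x1 = (1/2) x2 - 3 x3, solved from 2 x1 - x2 + 6 x3 = 0
def solution(x2, x3):
    return np.array([0.5 * x2 - 3.0 * x3, x2, x3])

# the two basis vectors come from (x2, x3) = (1, 0) and (0, 1)
print(solution(1, 0))   # the vector (1/2, 1, 0)
print(solution(0, 1))   # the vector (-3, 0, 1)

# every choice of the free variables satisfies 2 x1 - x2 + 6 x3 = 0
x = solution(4.0, -2.0)
assert np.isclose(2 * x[0] - x[1] + 6 * x[2], 0)
```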

To get the book's final answer, note that multiplying the first basis vector by $2$ still gives a basis — scaling a basis vector by a nonzero constant doesn't change its span — and it lets the author avoid typing fractions.