Final step in proving that the only invariant subspaces of $\mathbb{R}^2$ are $\mathbb{R}^2$ and $0$


There's this example in my book that goes as follows:

Let $T$ be the linear operator on $\mathbb{R}^2$ represented in the standard ordered basis by the matrix $$\begin{bmatrix}0&-1\\1&0\end{bmatrix}$$ Then the only subspaces of $\mathbb{R}^2$ which are invariant under $T$ are $\mathbb{R}^2$ and $0$. Any other invariant subspace $W$ would necessarily have dimension $1$. But if $W$ is the subspace spanned by some non-zero vector $\alpha$, then the statement that $W$ is invariant under $T$ means that $T\alpha=c\alpha$ for some real number $c$.

So far so good, but here comes the step I don't understand.

But this is impossible with $\alpha\neq0$ because one can easily verify that for any $c$ the operator $(T-cI)$ is invertible.

My question is: why does it matter that $T-cI$ is invertible, and why does that make things impossible? Additionally, are there alternative ways to prove such a statement?

2 Answers

Best Answer

$T\alpha=c\alpha$ implies $T\alpha-c\alpha=0$, i.e. $(T-cI)\alpha=0$. Here $\det(T-cI)=c^2+1>0$ for every real $c$, so $T-cI$ is always invertible. Multiplying the equation $(T-cI)\alpha=0$ from the left by the inverse of $T-cI$ gives $\alpha=0$, contradicting $\alpha\neq0$.
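As a quick numerical sanity check (not part of the proof), one can verify with NumPy that $\det(T-cI)=c^2+1$ for a sample of real values $c$, so $(T-cI)\alpha=0$ has only the trivial solution:

```python
import numpy as np

T = np.array([[0.0, -1.0],
              [1.0, 0.0]])
I = np.eye(2)

for c in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    # det(T - cI) = c^2 + 1 > 0, so T - cI is invertible for every real c.
    d = np.linalg.det(T - c * I)
    assert np.isclose(d, c**2 + 1)
    # Invertibility means (T - cI)x = 0 has only the zero solution.
    x = np.linalg.solve(T - c * I, np.zeros(2))
    assert np.allclose(x, 0.0)
```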


There is an elementary alternative proof. Suppose $T\alpha = c\alpha$ and write $\alpha = (a, b)$. Then $T\alpha = (-b, a)$, so $-b = ca$ and $a = cb$. Substituting $b = -ca$ into the second equation gives $a = -c^2 a$, i.e. $(1+c^2)a = 0$, which forces $a = 0$ and hence $b = 0$. So the only real possibility is $a = b = 0$.
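Equivalently, $T$ is rotation by $90^\circ$, whose eigenvalues are $\pm i$; since $T$ has no real eigenvalue, it has no $1$-dimensional invariant subspace. A small check of this fact (an illustration, not the proof itself):

```python
import numpy as np

T = np.array([[0.0, -1.0],
              [1.0, 0.0]])

# The eigenvalues of this rotation matrix are purely imaginary (+i and -i),
# so T admits no real eigenvalue and hence no line through 0 is invariant.
eig = np.linalg.eigvals(T)
assert np.allclose(eig.real, 0.0)
assert np.allclose(sorted(eig.imag), [-1.0, 1.0])
```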