How to show that projections are diagonalizable?


Let $E$ be a linear operator on a finite-dimensional vector space $V$ such that $E^2=E.$ Then we know that $E$ is a projection; in particular, every $v\in V$ can be uniquely written as $v=Ev+(v-Ev)$ with $Ev\in\operatorname{Im}(E)$ and $v-Ev\in\ker(E).$

If $\mathcal{B}_1$ is a basis for $\operatorname{Im}(E)$ and $\mathcal{B}_2$ is a basis for $\ker (E),$ then I know that $\mathcal{B}=\mathcal{B}_1\cup \mathcal{B}_2$ is a basis for $V.$ Now I want to show that the matrix $[E]_{\mathcal{B}}$ is diagonal. Note that a similar question has already been asked on this website; however, I am more interested in knowing whether the following approach works, and if not, why.

Write $\mathcal{B}=\{\alpha_1,\alpha_2,\dots,\alpha_r,\alpha_{r+1},\dots,\alpha_{n}\},$ where the first $r$ elements are in $\mathcal{B}_1$ and the remaining ones are in $\mathcal{B}_2.$ The entries of the matrix are determined, for $1\leq j\leq r,$ by

$$E(\alpha_j)=\sum_{i=1}^{n}a_{ij}\alpha_{i},$$ and the columns with $r+1\leq j\leq n$ are zero: if we pick $\alpha_{r+1}$, for instance, then $E(\alpha_{r+1})=0,$ so $$0=\sum_{i=1}^{n}a_{i,\,r+1}\alpha_{i}\implies \text{all the coefficients are $0$, as the vectors are linearly independent}. $$

So the goal is to somehow show that for each $1\leq j\leq r$ we have $a_{ij}=0$ for each $i\not =j.$ This is where I am facing some trouble.

Maybe we can argue by contradiction: say $a_{ij}=k\not=0$ for some $i\not =j.$

Edit:

How about this: since $E$ kills $\alpha_{r+1},\dots,\alpha_n,$ $$E(\alpha_j)=\sum_{i=1}^{n}a_{ij}\alpha_{i}\implies E\Bigl(\alpha_{j}-\sum_{i=1}^{r}a_{ij}\alpha_{i}\Bigr)=E(\alpha_j)-E\Bigl(\sum_{i=1}^{n}a_{ij}\alpha_{i}\Bigr)=E(\alpha_j)-E^2(\alpha_j)=0.$$ The vector $\alpha_{j}-\sum_{i=1}^{r}a_{ij}\alpha_{i}$ therefore lies in $\ker(E)$, but it also lies in $\operatorname{Im}(E)$, and $\operatorname{Im}(E)\cap\ker(E)=\{0\}$; hence $\alpha_{j}=\sum_{i=1}^{r}a_{ij}\alpha_{i}.$ Linear independence then gives $a_{ij}=\delta_{ij}$ for $1\leq i\leq r.$ Finally, $E(\alpha_j)-\alpha_j=\sum_{i=r+1}^{n}a_{ij}\alpha_i$ lies in $\operatorname{Im}(E)\cap\ker(E)=\{0\},$ so $a_{ij}=0$ for $i>r$ as well.
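The computation above can be sanity-checked numerically. Here is a minimal NumPy sketch (the projection $E$ and the bases are illustrative choices, not from the question): it builds an idempotent matrix, concatenates bases of its image and kernel, and confirms that the matrix of $E$ in that basis is $\operatorname{diag}(1,\dots,1,0,\dots,0)$.

```python
import numpy as np

# An illustrative projection on R^3: orthogonal projection onto the
# column space of A.  E^2 = E holds (up to rounding).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
E = A @ np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(E @ E, E)  # E is idempotent

# B_1: basis of Im(E) (the columns of A).
# B_2: basis of ker(E) (here, a vector orthogonal to col(A)).
b1 = A
b2 = np.cross(A[:, 0], A[:, 1]).reshape(3, 1)
B = np.hstack([b1, b2])  # change-of-basis matrix, columns = alpha_1..alpha_3

# Matrix of E in the basis B: [E]_B = B^{-1} E B.
E_B = np.linalg.inv(B) @ E @ B
print(np.round(E_B, 10))  # diag(1, 1, 0): image vectors fixed, kernel killed
```

This is only a numerical illustration of the claim $a_{ij}=\delta_{ij}$ for $j\leq r$, of course, not a proof.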

5 Answers

Answer (score 2):

Since $E^2 = E$, the minimal polynomial $m_E(x)$ of $E$ divides $x^2-x = x(x-1)$. So $m_E(x)$ is either $x$, $x-1$, or $x(x-1)$. Since it is a product of distinct monic linear factors, $E$ is diagonalizable.
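This argument can be illustrated numerically: every eigenvalue of an idempotent matrix lies in $\{0,1\}$, and an eigendecomposition diagonalizes it. A small sketch (the oblique projection below is an arbitrary illustrative example, not from the answer):

```python
import numpy as np

# An illustrative (oblique, non-symmetric) projection: E^2 = E.
E = np.array([[1.0, 2.0],
              [0.0, 0.0]])
assert np.allclose(E @ E, E)

# Since the minimal polynomial divides x(x-1), the eigenvalues are 0 and 1
# and E is diagonalizable: E = V D V^{-1} with D diagonal.
vals, vecs = np.linalg.eig(E)
D = np.linalg.inv(vecs) @ E @ vecs
print(np.round(D, 10))  # diagonal matrix with entries 0 and 1
```

Note that the example is deliberately non-symmetric, so the diagonalizability does not come for free from the spectral theorem.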

Answer (score 0):

Note that each $\alpha_j$ (with $1\leqslant j\leqslant r$) is in the image of $E$. So $\alpha_j=E(\beta_j)$ for some $\beta_j\in V$, and therefore $$E(\alpha_j)=E\bigl(E(\beta_j)\bigr)=E(\beta_j)=\alpha_j.$$

Answer (score 6):

Let $p(x):=x^2-x=x(x-1)$. Then $p(E)=0$, so $m_E\mid p(x)$. Since $p(x)$ factors into distinct linear factors, so does $m_E$, which implies that $E$ is diagonalizable.

Answer (score 0):

Actually, it's even simpler than that. The image and the kernel of $E$ (which are respectively the eigenspaces associated to the eigenvalues $1$ and $0$) decompose the whole space as a direct sum. The matrix of $E$ in a basis obtained by concatenating bases of these two subspaces is diagonal.

Answer (score 0):

A linear operator $E$ on a finite-dimensional vector space $V$ is diagonalizable iff the minimal polynomial of $E$ has no repeated roots. The minimal polynomial of $E$ must divide $m(\lambda)=\lambda(\lambda-1)$, from which the result follows.