Understanding why this linear operator is diagonalizable.


Let me show you the context of my question:

Let $T:V \to V$ be a linear operator, where $V$ is a finite-dimensional vector space, with minimal polynomial $m_T(x)=(x - \lambda_1)^{m_1} \cdots(x-\lambda_k)^{m_k}$. In the proof of the Primary Decomposition Theorem we take $q_j=\displaystyle\prod_{i \neq j}(x- \lambda_i)^{m_i}$; then, since $\gcd(q_1,q_2,\dots,q_k)=1$, there exist $f_1,f_2,\dots,f_k$ such that $f_1q_1+f_2q_2+\dots +f_kq_k = 1$.

So, if we call $g_j= f_jq_j$, we have that the operator $P_j = g_j(T)$ is a projection such that

  1. $P_jP_i = 0$ if $j \neq i$
  2. $I = P_1+\cdots+P_k$
  3. $\operatorname{Im} P_j = \ker (T - \lambda_j I)^{m_j}= W_j$.

Using this, one proves that $V = W_1 \oplus\cdots \oplus W_k$.
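The construction above can be sanity-checked numerically. Everything concrete below is my own illustrative choice, not from the question: the matrix `T` (whose minimal polynomial is $(x-1)(x-2)^2$, so $\lambda_1=1$, $m_1=1$, $\lambda_2=2$, $m_2=2$) and the Bézout coefficients $f_1 = 1$, $f_2 = 3-x$, found by hand from $1\cdot(x-2)^2 + (3-x)(x-1) = 1$.

```python
# Numerical sketch on a concrete example of my own choosing (not from the
# question): T has minimal polynomial (x - 1)(x - 2)^2.
import numpy as np

T = np.array([[1., 1., 1.],
              [0., 2., 1.],
              [0., 0., 2.]])
I = np.eye(3)

# q_1 = (x - 2)^2, q_2 = x - 1.  Bezout coefficients found by hand:
# 1 * (x - 2)^2 + (3 - x) * (x - 1) = 1, i.e. f_1 = 1 and f_2 = 3 - x.
P1 = (T - 2*I) @ (T - 2*I)       # g_1(T) = f_1(T) q_1(T)
P2 = (3*I - T) @ (T - I)         # g_2(T) = f_2(T) q_2(T)

assert np.allclose(P1 @ P2, 0)                                # 1. P_i P_j = 0
assert np.allclose(P1 + P2, I)                                # 2. P_1 + P_2 = I
assert np.allclose(P1 @ P1, P1) and np.allclose(P2 @ P2, P2)  # projections
assert np.allclose((T - I) @ P1, 0)                  # 3. Im P_1 in ker(T - I)
assert np.allclose((T - 2*I) @ (T - 2*I) @ P2, 0)    # 3. Im P_2 in ker(T-2I)^2
print("properties 1-3 hold")
```

The last two assertions check property 3 in the form $(T-\lambda_j I)^{m_j}P_j = 0$, i.e. the image of $P_j$ lies inside $W_j$.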

OK, I have a simple question: given the conditions above, why is $D = \lambda_1P_1 + \dots+ \lambda_kP_k$ diagonalizable?

I know that for each $v \in V$, by 2. we have $v = P_1(v)+ \cdots + P_k(v)$, and by 1. (together with $P_i^2 = P_i$) we have $$D(v) = (\lambda_1P_1+ \cdots+ \lambda_kP_k)(P_1(v)+ \cdots+ P_k(v)) = \lambda_1P_1(v)+ \cdots+\lambda_kP_k(v).$$

I have no idea what to do with that, so can you help me?



Best answer:

First of all, we have $D = \lambda_1P_1 + \dots+ \lambda_kP_k$, where each $P_i$ is a projection, so $P_i^2 = P_i$, and $P_iP_j = 0$ if $i \neq j$. Multiplying both sides by $P_i$ gives $$DP_i = \lambda_iP_i^2 = \lambda_iP_i.$$ This says that every vector in the image of $P_i$ lies in $\ker(D - \lambda_iI)$. Since $P_i \neq O$, that kernel is nontrivial, so $D$ has $\lambda_i$ as an eigenvalue for each $i$. In fact, $\{\lambda_i\}_{i=1}^k$ are all of the eigenvalues of $D$: indeed $$ D - cI = (\lambda_1-c) P_1 + \cdots + (\lambda_k-c) P_k, $$ and if $(D - cI)x = 0$ for some nonzero $x$, then applying $P_i$ to both sides gives $(\lambda_i-c)P_ix = 0$ for all $i$. Since $P_jx$ must be nonzero for some $j$ (if every $P_ix$ were zero, then $x = Ix = \sum_i P_ix = 0$, contradicting $x \neq 0$), we must have $c = \lambda_j$, meaning $\{\lambda_i\}_{i=1}^k$ are all of the eigenvalues of $D$.

It then follows that for every $x$ we have $$ x = Ix = P_1x+\cdots+P_kx, $$ and since each nonzero $P_ix$ is an eigenvector of $D$, every vector can be written as a linear combination of eigenvectors, meaning $D$ is diagonalizable.
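The two steps of this argument can be checked numerically. The matrix `T` below is my own illustrative choice (not from the answer), with minimal polynomial $(x-1)(x-2)^2$; the projections come from the Bézout identity $1\cdot(x-2)^2 + (3-x)(x-1)=1$, worked out by hand.

```python
# Check D P_i = lambda_i P_i, and that the lambda_i exhaust the spectrum of D,
# on a concrete example of my own choosing: minimal polynomial (x-1)(x-2)^2.
import numpy as np

T = np.array([[1., 1., 1.],
              [0., 2., 1.],
              [0., 0., 2.]])
I = np.eye(3)
P1 = (T - 2*I) @ (T - 2*I)   # projection onto W_1 = ker(T - I)
P2 = (3*I - T) @ (T - I)     # projection onto W_2 = ker((T - 2I)^2)
D = 1*P1 + 2*P2

assert np.allclose(D @ P1, 1 * P1)   # Im P_1 consists of eigenvectors for 1
assert np.allclose(D @ P2, 2 * P2)   # Im P_2 consists of eigenvectors for 2

# every x = P_1 x + P_2 x is a sum of eigenvectors of D, and the
# eigenvalues of D are exactly 1 and 2 (2 with multiplicity dim W_2 = 2)
assert sorted(np.linalg.eigvals(D).round(6).tolist()) == [1.0, 2.0, 2.0]
print("D is diagonalizable with eigenvalues 1, 2, 2")
```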

Another answer:

Since $P_j$ is a projection (so $P_j^2=P_j$) with image$~W_j$, it acts as the scalar $1$ (i.e., as identity) on vectors in $W_j$, and by condition 1 it acts as the scalar$~0$ on any vector in $W_i$ with $i\neq j$. Then the linear operator $D=\lambda_1P_1+\cdots+\lambda_kP_k$ acts as the scalar$~\lambda_i$ on vectors in $W_i$, in other words such (nonzero) vectors are eigenvectors of$~D$ for the eigenvalue $\lambda_i$. Since the (direct) sum of the subspaces $W_i$ equals the whole space, this makes $D$ diagonalisable, and you can find a basis of eigenvectors by combining bases of the subspaces$~W_i$.
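The final step, combining bases of the subspaces $W_i$ into an eigenbasis, can be sketched numerically. The operator `A` below and the listed basis vectors are my own illustrative choices (the basis vectors were read off from the columns of the computed projections), assuming minimal polynomial $(x-1)(x-2)^2$.

```python
# Sketch of "combine bases of the W_i into an eigenbasis", on a concrete
# operator A of my own choosing, with minimal polynomial (x - 1)(x - 2)^2.
import numpy as np

A = np.array([[1., 1., 1.],
              [0., 2., 1.],
              [0., 0., 2.]])
I = np.eye(3)
P1 = (A - 2*I) @ (A - 2*I)   # projection onto W_1 = ker(A - I)
P2 = (3*I - A) @ (A - I)     # projection onto W_2 = ker((A - 2I)^2)
D = 1*P1 + 2*P2

# a basis of W_1 = Im P_1 next to a basis of W_2 = Im P_2 gives a basis of V
S = np.column_stack([[1., 0., 0.],      # spans W_1
                     [1., 1., 0.],      # these two span W_2
                     [0., 0., 1.]])

# in the combined basis D is diagonal: lambda_i repeated dim(W_i) times
assert np.allclose(np.linalg.inv(S) @ D @ S, np.diag([1., 2., 2.]))
print("S diagonalises D")
```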