Spectral theorem for diagonalizable matrices and associated spectral projectors


I was going through my lecture notes and was wondering about the connection between the diagonalized form of a matrix and its spectral projectors:

Let $A \in M_{n\times n}(F)$ with spectrum $\sigma(A) = \{\lambda_1, \dots, \lambda_k\}$. $A$ is diagonalizable if and only if there exist matrices $\{G_1, \dots, G_k\}$ such that

$$ A = \lambda _1 G_1 + ... + \lambda _kG_k,$$ where

$G_i$ is the projector onto $N(A-\lambda _i Id)$ along $R(A-\lambda _i Id)$

$G_i G_j = 0$ whenever $i \ne j$.

$G_1 + ... + G_k = Id.$

I know that if I can diagonalize a matrix $A$, we can write

$$A = PDP^{-1} = \lambda_1 PD_1P^{-1} + \dots + \lambda_n PD_nP^{-1},$$

where $D_i$ is the matrix that is zero except for a $1$ in position $(i,i)$, and the eigenvalues are listed with repetition according to multiplicity.

Therefore (after grouping terms with equal eigenvalues) $PD_iP^{-1}$ should be $G_i$, which is indeed the case.
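This identity can be checked numerically; here is a minimal sketch using NumPy, with a made-up symmetric $2\times 2$ matrix (eigenvalues $1$ and $3$):

```python
import numpy as np

# Hypothetical diagonalizable matrix with eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
Pinv = np.linalg.inv(P)

# Rebuild A as sum_i lambda_i * P D_i P^{-1}
A_rebuilt = np.zeros_like(A)
for i, lam in enumerate(eigvals):
    D_i = np.zeros_like(A)      # D_i: all zeros except a 1 at (i, i)
    D_i[i, i] = 1.0
    A_rebuilt += lam * P @ D_i @ Pinv

assert np.allclose(A_rebuilt, A)
```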

Let $U$ and $W$ be subspaces of $V$. The oblique projection matrix onto $U$ along $W$ is given by $A(B^TA)^{-1}B^T$, where the columns of $A$ are the basis vectors of $U$ and the columns of $B$ are the basis vectors of the orthogonal complement of $W$.
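As a small sanity check of that formula, here is a sketch with hypothetical one-dimensional subspaces of $\mathbb{R}^2$ (the `A` and `B` below are the basis matrices of the formula, not the matrix $A$ from the question):

```python
import numpy as np

# Project onto U = span{(1,0)} along W = span{(1,1)}
A = np.array([[1.0],
              [0.0]])       # columns: basis of U
B = np.array([[1.0],
              [-1.0]])      # columns: basis of W-perp = span{(1,-1)}

Proj = A @ np.linalg.inv(B.T @ A) @ B.T

# Proj is idempotent, fixes U, and annihilates W
assert np.allclose(Proj @ Proj, Proj)
assert np.allclose(Proj @ np.array([1.0, 0.0]), [1.0, 0.0])
assert np.allclose(Proj @ np.array([1.0, 1.0]), [0.0, 0.0])
```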

Now my question is: why are $PD_iP^{-1}$ and the projector onto $N(A-\lambda_i Id)$ along $R(A-\lambda_i Id)$ equal in this case? I get the idea that I transform my basis vectors to the eigenvectors with the $P$ matrix, but why does the projection depend on all eigenvectors? Furthermore, why do the projectors necessarily satisfy $G_i G_j = 0$ whenever $i \ne j$?



Since you have a basis of eigenvectors, every vector decomposes uniquely into components in the distinct eigenspaces. That is, for all $v \in V$ there is a unique decomposition

$$ v = \sum_{\lambda \in \sigma(A)} v_\lambda $$

where $v_\lambda \in V_{(\lambda)} := N(A - \lambda I)$. This defines the projections $$ G_\lambda : V \to V_{(\lambda)} \subset V : v \mapsto v_\lambda.$$

Then we clearly have

$$ \sum_\lambda G_\lambda = I.$$

Indeed just evaluate in a vector $v$ to get $$ \sum_\lambda G_\lambda( v) = \sum_{\lambda} v_\lambda = v = I(v).$$

We also have that

$$G_{\lambda_i} \circ G_{\lambda_j} = 0$$ whenever $\lambda_i \neq \lambda_j$

Indeed $$ G_{\lambda_i} \circ G_{\lambda_j} (v) = G_{\lambda_i}(v_{\lambda_j}) = 0,$$ since the decomposition of $v_{\lambda_j}$ has zero component in every eigenspace other than $V_{(\lambda_j)}$.


How do we find the appropriate matrix for $G_\lambda$?

You know that the linear map $G_\lambda$ is the identity on $V_{(\lambda)}$ and $0$ on the other eigenspaces. It follows that you know the image under $G_\lambda$ of a basis of eigenvectors $\{e_1, \dots, e_n\}$. Indeed we have

$$ G_\lambda(e_i) = \begin{cases} e_i & \text{if } Ae_i = \lambda e_i, \\ 0 & \text{otherwise.} \end{cases}$$

It follows that the matrix of $G_\lambda$ in the eigenbasis is the diagonal matrix $D_\lambda$ whose $i$-th diagonal entry is $1$ if and only if $e_i$ is an eigenvector with eigenvalue $\lambda$. In the canonical basis the matrix is therefore
$$ G_\lambda = PD_\lambda P^{-1}.$$
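The whole construction can be verified numerically. The sketch below uses a hypothetical $3\times 3$ matrix with a repeated eigenvalue and groups the $PD_iP^{-1}$ terms by (rounded) eigenvalue to form the $G_\lambda$:

```python
import numpy as np

# Hypothetical symmetric matrix: eigenvalue 1 (multiplicity 2) and 4
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

eigvals, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)

# Build G_lambda = P D_lambda P^{-1}, grouping equal eigenvalues
# (rounding merges numerically identical eigenvalues)
G = {}
for i, lam in enumerate(np.round(eigvals, 8)):
    D = np.zeros_like(A)
    D[i, i] = 1.0
    G[lam] = G.get(lam, np.zeros_like(A)) + P @ D @ Pinv

projectors = list(G.values())

# Check: sum G_lambda = I, G_i G_j = 0 for i != j, A = sum lambda G_lambda
assert np.allclose(sum(projectors), np.eye(3))
assert np.allclose(projectors[0] @ projectors[1], 0)
assert np.allclose(sum(lam * g for lam, g in G.items()), A)
```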