Composition of two orthogonal projections

Let $V$ be a finite dimensional Euclidean space and let $W_1,W_2$ be two subspaces of $V$. Let $P_1,P_2$ denote the projections onto $W_1,W_2$ respectively. Is it true that the composition $P_1\circ P_2$ is always diagonalizable?

Orthogonal projections are Hermitian positive semi-definite (with eigenvalues $0$ and/or $1$). The product of two positive semi-definite matrices (operators) is diagonalisable (see, e.g., Corollary 7.6.2 here -- unfortunately, it is not part of the preview). Therefore, the product of two orthogonal projections is diagonalisable.
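A quick numerical sanity check of this claim (a NumPy sketch; `random_projection` is a helper name introduced here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(n, k, rng):
    # Orthogonal projection onto the span of k random vectors:
    # P = Q Q^*, where Q has orthonormal columns.
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    return Q @ Q.T

n = 5
P1 = random_projection(n, 3, rng)  # rank-3 orthogonal projection
P2 = random_projection(n, 2, rng)  # rank-2 orthogonal projection

M = P1 @ P2
w, V = np.linalg.eig(M)
# M is diagonalizable iff its eigenvector matrix V is invertible,
# i.e. M = V diag(w) V^{-1} with a nonsingular V.
err = np.linalg.norm(V @ np.diag(w) @ np.linalg.inv(V) - M)
```

The nonzero eigenvalues found this way are the squared cosines of the principal angles between the two subspaces.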

The proof is based on the fact that given two positive semi-definite matrices $A$ and $B$ such that $r=\mathrm{rank}(A)$, there is a nonsingular matrix $X$ such that $$\tag{$*$} A=X\begin{bmatrix}I_r & 0 \\ 0 & 0\end{bmatrix}X^*, \quad B=X^{-*}DX^{-1}, $$ where $D$ is diagonal (and has nonnegative diagonal entries). Then it follows that $$ AB=X\left(\underbrace{\begin{bmatrix}I_r & 0 \\ 0 & 0\end{bmatrix}D}_{\text{diagonal matrix}}\right)X^{-1}. $$


The proof of ($*$) is not hard but rather technical.

Case $A$ is definite: It is easy to show that $AB$ is diagonalisable if $A$ and $B$ are Hermitian positive semi-definite and at least one of them (say, $A$) is positive definite. To show this, consider the Cholesky factorisation of $A$: $A=LL^*$ for some nonsingular triangular matrix $L$. Then, since $L^*BL$ is positive semi-definite, we can unitarily diagonalise it as $$ L^*BL=UDU^*, $$ where $U$ is unitary and $D$ is diagonal with non-negative diagonal entries. Now set $X=LU$ and see that $$\tag{1} A=LL^*=LUU^*L^*=XX^*, \quad B=L^{-*}UDU^*L^{-1}=X^{-*}DX^{-1}. $$
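For the definite case, the construction $X=LU$ can be traced numerically (a NumPy sketch over the reals with arbitrary test matrices, so transposes play the role of conjugate transposes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)   # Hermitian positive definite
H = rng.standard_normal((n, 2))
B = H @ H.T                   # positive semi-definite, rank 2

L = np.linalg.cholesky(A)             # A = L L^*
d, U = np.linalg.eigh(L.T @ B @ L)    # L^* B L = U D U^*, d >= 0
X = L @ U                             # the transformation in (1)
D = np.diag(d)
Xinv = np.linalg.inv(X)

assert np.allclose(X @ X.T, A)            # A = X X^*
assert np.allclose(Xinv.T @ D @ Xinv, B)  # B = X^{-*} D X^{-1}
assert np.allclose(X @ D @ Xinv, A @ B)   # hence A B = X D X^{-1}
```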

Both $A$ and $B$ semi-definite: The case where both $A$ and $B$ are semi-definite is a bit more complicated but proceeds essentially in the same fashion. Let $r=\mathrm{rank}(A)$. There exists a nonsingular matrix $L$ (not necessarily triangular) such that $$\tag{2} A=L\begin{bmatrix}I_r&0\\0&0\end{bmatrix}L^*. $$ This decomposition can be obtained, e.g., from the eigen-decomposition of $A$ by "absorbing" the diagonal matrix containing the eigenvalues of $A$ into the unitary factors. Note that we cannot proceed directly as in the previous case because for a unitary $U$, we do not in general have $$ U\begin{bmatrix}I_r&0\\0&0\end{bmatrix}U^*= \begin{bmatrix}I_r&0\\0&0\end{bmatrix}, $$ which was used to show that $A=XX^*$ before; see (1). So we need a somewhat fancier diagonalisation of $L^*BL$, one that leaves the diagonal matrix in (2) untouched by the multiplication from both sides.

So, consider the matrix $$ L^*BL=\left[\begin{array}{cc}B_{11}&B_{12}\\B_{12}^*&B_{22}\end{array}\right]\begin{array}{l}\}r\\\}n-r\end{array}. $$ Since $L^*BL$ is positive semi-definite, we have $\mathcal{N}(B_{11})\subset\mathcal{N}(B_{12}^*)$. (This is easy to show using the fact that if $x^*Cx=0$ for a semi-definite $C$ and a vector $x$, then $Cx=0$; it is in fact a generalisation of the property of semi-definite matrices that a zero on the diagonal implies zeros in the corresponding row and column.) By the relations between the fundamental subspaces, this implies $\mathcal{R}(B_{12})\subset \mathcal{R}(B_{11})$, and hence there is a matrix $Z\in\mathbb{C}^{r\times(n-r)}$ such that $B_{12}=B_{11}Z$. It follows that $$ L^*BL=\begin{bmatrix}B_{11}&B_{11}Z\\Z^*B_{11}&B_{22}\end{bmatrix} =\begin{bmatrix}I&0\\Z^*&I\end{bmatrix}\begin{bmatrix}B_{11}&0\\0&B_{22}-Z^*B_{11}Z\end{bmatrix}\begin{bmatrix}I&Z\\0&I\end{bmatrix}. $$ Both diagonal blocks on the right-hand side are positive semi-definite (the factorisation above is a congruence), so there are unitary matrices $U_1$ and $U_2$ and diagonal positive semi-definite matrices $D_1$ and $D_2$ such that $$ B_{11}=U_1D_1U_1^*, \quad B_{22}-Z^*B_{11}Z=U_2D_2U_2^*. $$ The transformation matrix $X$ we look for is then given by $$ X=L\begin{bmatrix}I&-Z\\0&I\end{bmatrix}\begin{bmatrix}U_1&0\\0&U_2\end{bmatrix}. $$ With $D=\mathrm{diag}(D_1,D_2)$, it is easy to see that $B=X^{-*}DX^{-1}$.
Indeed, $$ \begin{split} B &= L^{-*} \begin{bmatrix} I & 0 \\ Z^* & I \end{bmatrix} \begin{bmatrix} B_{11} & 0 \\ 0 & B_{22}-Z^*B_{11}Z \end{bmatrix} \begin{bmatrix} I & Z \\ 0 & I \end{bmatrix} L^{-1}\\ &= \underbrace{L^{-*} \begin{bmatrix} I & 0 \\ Z^* & I \end{bmatrix} \begin{bmatrix} U_1 & 0 \\ 0 & U_2 \end{bmatrix}}_{X^{-*}} \begin{bmatrix} D_1 & 0 \\ 0 & D_2 \end{bmatrix} \underbrace{\begin{bmatrix} U_1 & 0 \\ 0 & U_2 \end{bmatrix}^* \begin{bmatrix} I & Z \\ 0 & I \end{bmatrix} L^{-1}}_{X^{-1}}\\ &=X^{-*}DX^{-1}. \end{split} $$ Also, $$ \begin{split} X\begin{bmatrix}I_r&0\\0&0\end{bmatrix}X^* &= L\begin{bmatrix}I&-Z\\0&I\end{bmatrix}\begin{bmatrix}U_1&0\\0&U_2\end{bmatrix} \begin{bmatrix}I_r&0\\0&0\end{bmatrix} \begin{bmatrix}U_1&0\\0&U_2\end{bmatrix}^*\begin{bmatrix}I&0\\-Z^*&I\end{bmatrix}L^*\\ &= L \begin{bmatrix}I_r&0\\0&0\end{bmatrix} L^*\\ &=A. \end{split} $$
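The two steps of the semi-definite construction can likewise be traced numerically (a NumPy sketch over the reals; the sizes, ranks, and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 5, 3
G = rng.standard_normal((n, r))
A = G @ G.T                        # PSD with rank(A) = r
H = rng.standard_normal((n, 4))
B = H @ H.T                        # PSD

# Step 1: A = L diag(I_r, 0) L^*, cf. (2).
wA, UA = np.linalg.eigh(A)
order = np.argsort(wA)[::-1]       # nonzero eigenvalues first
wA, UA = wA[order], UA[:, order]
s = np.sqrt(np.maximum(wA, 0.0))
s[r:] = 1.0                        # arbitrary nonzero scaling keeps L nonsingular
L = UA * s                         # L = UA @ diag(s)
J = np.diag([1.0] * r + [0.0] * (n - r))

# Step 2: block-diagonalise L^* B L while preserving J.
C = L.T @ B @ L
B11, B12, B22 = C[:r, :r], C[:r, r:], C[r:, r:]
Z = np.linalg.lstsq(B11, B12, rcond=None)[0]   # B12 = B11 Z
d1, U1 = np.linalg.eigh(B11)
d2, U2 = np.linalg.eigh(B22 - Z.T @ B11 @ Z)   # Schur complement
T = np.eye(n)
T[:r, r:] = -Z                                 # [[I, -Z], [0, I]]
Ublk = np.block([[U1, np.zeros((r, n - r))],
                 [np.zeros((n - r, r)), U2]])
X = L @ T @ Ublk
D = np.diag(np.concatenate([d1, d2]))
Xinv = np.linalg.inv(X)

err_A = np.linalg.norm(X @ J @ X.T - A)              # A  = X J X^*
err_B = np.linalg.norm(Xinv.T @ D @ Xinv - B)        # B  = X^{-*} D X^{-1}
err_AB = np.linalg.norm(X @ (J @ D) @ Xinv - A @ B)  # AB = X (J D) X^{-1}
```

The last check confirms the point of the whole construction: $AB$ is similar to the diagonal matrix $\mathrm{diag}(I_r,0)\,D$.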


Note that the product of two oblique projections need not be diagonalisable: for $$ P_1 = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad P_2 = \begin{bmatrix} 0 &1 \\ 0&1 \end{bmatrix}, $$ the product $$ P_1P_2=\begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix} $$ is nonzero and nilpotent, and hence cannot be diagonalised.
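This counterexample can be checked directly (a NumPy sketch):

```python
import numpy as np

P1 = np.array([[1.0, 1.0], [0.0, 0.0]])
P2 = np.array([[0.0, 1.0], [0.0, 1.0]])

# Both are idempotent (projections), but neither is Hermitian,
# so they are oblique rather than orthogonal projections.
assert np.allclose(P1 @ P1, P1)
assert np.allclose(P2 @ P2, P2)
assert not np.allclose(P1, P1.T) and not np.allclose(P2, P2.T)

M = P1 @ P2                      # [[0, 2], [0, 0]]
# M is nonzero and nilpotent (M^2 = 0), so its only eigenvalue is 0.
# A diagonalizable matrix whose sole eigenvalue is 0 would be the
# zero matrix, hence M is not diagonalizable.
assert np.allclose(M @ M, np.zeros((2, 2)))
assert not np.allclose(M, np.zeros((2, 2)))
```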