$\begin{pmatrix}A_1&A_{12}\\0&A_2\end{pmatrix}$ is similar to $\begin{pmatrix}A_1&0\\0&A_2\end{pmatrix}$


Let $A_1$ be an $m \times m$ matrix and $A_2$ an $n \times n$ matrix, and suppose their characteristic polynomials $f_1(x)$ and $f_2(x)$ are relatively prime, i.e., $(f_1, f_2) = 1$. Show that $$\begin{pmatrix}A_1&A_{12}\\0&A_2\end{pmatrix}$$ is similar to $$\begin{pmatrix}A_1&0\\0&A_2\end{pmatrix}$$


If $A_1$ and $A_2$ are diagonalizable over the complex field, then $f_1$ and $f_2$ have no common root, and the claim does not seem so difficult. What about a general field?


Best answer:

For brevity I will write $A_1,A_{12}$ and $A_2$ as $A,B$ and $C$ respectively. Consider the equation $$ \pmatrix{A&B\\ 0&C}\pmatrix{I_m&X\\ 0&I_n}=\pmatrix{I_m&X\\ 0&I_n}\pmatrix{A&0\\ 0&C}. $$ This is equivalent to the Sylvester equation $AX-XC=-B$. Define $L:X\mapsto AX-XC$. If we can prove that $L$ is nonsingular, then $L(X)=-B$ is solvable and we are done.

To prove that $L$ is nonsingular, we show that $L(X)=0$ has only the trivial solution. Suppose $L(X)=0$. Then $AX=XC$, hence $A^2X=AXC=XC^2$, and by induction $p(A)X=Xp(C)$ for every polynomial $p$. Take $p$ to be the characteristic polynomial of $C$. Then $p(C)=0$ by the Cayley–Hamilton theorem, while $p(A)$ is nonsingular because $p$ is relatively prime to the characteristic polynomial $q$ of $A$: by Bézout, $up+vq=1$ for some polynomials $u,v$, and since $q(A)=0$ this gives $u(A)p(A)=I$. Therefore $p(A)X=Xp(C)=0$, so $X=0$.
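The argument can be checked numerically. Below is a minimal sketch in Python (the concrete matrices $A$, $B$, $C$ are made up for illustration; the names match the answer): the map $L$ is written out as an $mn\times mn$ matrix using Kronecker products, the Sylvester equation $AX-XC=-B$ is solved, and the block similarity is verified.

```python
import numpy as np

# Made-up example matrices with disjoint spectra (so (f_1, f_2) = 1).
A = np.array([[1., 1.], [0., 2.]])   # eigenvalues 1, 2
C = np.array([[3., 0.], [1., 4.]])   # eigenvalues 3, 4
B = np.array([[1., 2.], [3., 4.]])
m, n = A.shape[0], C.shape[0]

# The map L : X -> AX - XC as a matrix acting on the row-major
# flattening of X: vec(AX - XC) = (A (x) I_n - I_m (x) C^T) vec(X).
L = np.kron(A, np.eye(n)) - np.kron(np.eye(m), C.T)

# Coprime characteristic polynomials make L nonsingular, so
# AX - XC = -B has a unique solution X.
X = np.linalg.solve(L, -B.flatten()).reshape(m, n)

# Verify the similarity with S = [[I, X], [0, I]]:
# [[A, B], [0, C]] S = S [[A, 0], [0, C]].
M = np.block([[A, B], [np.zeros((n, m)), C]])
D = np.block([[A, np.zeros((m, n))], [np.zeros((n, m)), C]])
S = np.block([[np.eye(m), X], [np.zeros((n, m)), np.eye(n)]])
print(np.allclose(M @ S, S @ D))  # True
```

The top-right block of $MS=SD$ reads $AX+B=XC$, which is exactly the Sylvester equation, so the check succeeding confirms the computation.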

Another answer:

Let the basis of the space with respect to which this is the matrix be $u_1,\dots,u_m,w_1,\dots,w_n$, and let $U$ be the span of $u_1,\dots, u_m$, $W$ the span of $w_1,\dots,w_n$.

We are discussing a linear map $\alpha: U\oplus W\to U\oplus W$ where the matrix of $\alpha|_U$ is $A_1$, and the matrix of the induced map $\bar{\alpha}$ on the quotient $(U\oplus W)/U$ is $A_2$.

We are given that the characteristic polynomials $f_1$ and $f_2$ of $A_1$ and $A_2$, respectively, are coprime.

The characteristic polynomial $f$ of $\alpha$ is of course $f_1 f_2$, which annihilates $\alpha$ by the Cayley–Hamilton theorem, so in the usual way the coprimeness of $f_1$ and $f_2$ gives the primary decomposition $$ U\oplus W=\ker f(\alpha)=\ker f_1(\alpha)\oplus \ker f_2(\alpha), $$ and this latter decomposition is into $\alpha$-invariant subspaces. [This is where @lysarus's hint is useful.]

Let us identify $\ker f_1(\alpha)$. The matrix of $f_1(\alpha)$ with respect to the basis $u_1,\dots, u_m,w_1,\dots,w_n$ is $$ \begin{pmatrix}f_1(A_1) & *\\O &f_1(A_2)\end{pmatrix}= \begin{pmatrix}O & *\\O &f_1(A_2)\end{pmatrix}, $$ since $f_1(A_1)=O$ by the Cayley–Hamilton theorem. As $f_1$ and $f_2$ are coprime (and $f_2$ annihilates $A_2$), the matrix $f_1(A_2)$ must be non-singular. Hence the kernel has dimension $m$ and contains $U$, so $\ker f_1(\alpha)=U$.

The action of $\alpha$ on the invariant subspace $\ker f_1(\alpha)=U$ is given by $A_1$. Suppose the action of $\alpha$ on the invariant subspace $\ker f_2(\alpha)$ is given by a matrix $B$. Then the action on the quotient $(U\oplus W)/U=\big(\ker f_1(\alpha)\oplus\ker f_2(\alpha)\big)/\ker f_1(\alpha)\cong\ker f_2(\alpha)$ is also given by $B$; but we know it is given by $A_2$, so after a change of basis in $\ker f_2(\alpha)$ we may assume that the matrix of $\alpha$ on $\ker f_2(\alpha)$ is $A_2$.

We have therefore shown that with respect to a suitable basis the matrix of $\alpha$ is $\begin{pmatrix}A_1 & O\\O &A_2\end{pmatrix}$ as required.
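The key step above, that $\ker f_1(\alpha)=U$, can also be sketched numerically. The following Python snippet uses made-up example matrices, and `polyval_matrix` is a small hypothetical helper (not from the answer) that evaluates a polynomial at a matrix by Horner's rule.

```python
import numpy as np

def polyval_matrix(coeffs, M):
    """Evaluate a polynomial (coefficients highest-degree first) at a matrix."""
    P = np.zeros_like(M)
    I = np.eye(len(M))
    for c in coeffs:
        P = P @ M + c * I
    return P

# Made-up matrices with coprime characteristic polynomials.
A1 = np.array([[1., 1.], [0., 2.]])    # eigenvalues 1, 2
A2 = np.array([[3., 0.], [1., 4.]])    # eigenvalues 3, 4
A12 = np.array([[1., 2.], [3., 4.]])
m, n = len(A1), len(A2)

# The block upper-triangular matrix of alpha in the basis u_1..u_m, w_1..w_n.
M = np.block([[A1, A12], [np.zeros((n, m)), A2]])

f1 = np.poly(A1)           # characteristic polynomial of A1, coeffs high->low
F = polyval_matrix(f1, M)  # the matrix of f_1(alpha) in the given basis

# f1(A1) = 0 by Cayley-Hamilton, so the first m columns vanish: U lies in the kernel.
print(np.allclose(F[:, :m], 0))        # True
# f1(A2) is nonsingular since (f1, f2) = 1, so rank F = n and ker f_1(alpha) = U.
print(np.linalg.matrix_rank(F) == n)   # True
```

The two checks together pin down the kernel exactly: it contains the $m$-dimensional subspace $U$ and has dimension $(m+n)-\operatorname{rank} F = m$.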