The multiplicative property of cofactor matrices


In this question, we just consider square matrices.

The cofactor matrix $\mathrm{C}(\mathrm{A}) = (c_{ij})$ of an $n$-by-$n$ matrix $\mathrm{A} = (a_{ij})$ ($n > 0$) is the $n$-by-$n$ matrix with

$$c_{ij} = (-1)^{i+j} \cdot \det{(\mathrm{A}_{ij})} \; \forall i,j,$$

where $\mathrm{A}_{ij}$ is the matrix remaining after removing the $i$-th row and the $j$-th column from $\mathrm{A}$.
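As a quick sanity check of this definition, the cofactor matrix can be computed by transcribing the formula directly (a minimal NumPy sketch; the function name is mine):

```python
import numpy as np

def cofactor_matrix(A):
    """Cofactor matrix C(A): c_ij = (-1)^(i+j) * det(A_ij), where A_ij
    is A with row i and column j removed (a direct transcription of the
    definition above)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C
```

For invertible $A$ this agrees with $\det(A)\,(A^{-1})^T$, since the adjugate $\operatorname{adj}(A)=C(A)^T$ satisfies $A^{-1}=\operatorname{adj}(A)/\det(A)$.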

Problem: Let $\mathrm{A},\mathrm{B}$ be $n$-by-$n$ matrices. Prove that $\mathrm{C}(\mathrm{AB}) = \mathrm{C}(\mathrm{A})\cdot\mathrm{C}(\mathrm{B})$.

I started learning linear algebra recently, and this property is one of the problems giving me a hard time. After giving up, I searched for a solution but did not succeed (my effort might not have been enough). I therefore hope to find a solution, or a link to one, here; preferably a solution that uses no knowledge of linear spaces and linear maps. For further information about cofactor matrices and their relatives, you can read the linked Wikipedia article.

Thanks in advance!


BEST ANSWER

Let $C(A)$ be the matrix with entries $\alpha_{ij}$, and likewise $C(B)=(\beta_{ij})$. Then the entries $\gamma_{ij}$ of $C(A)C(B)$ can be written as $$ \gamma_{ij} = \sum_{k=1}^n \alpha_{ik} \beta_{kj} = \sum_{k=1}^n(-1)^{i+j+2k} \det(A_{ik}) \det(B_{kj}) = (-1)^{i+j}\sum_{k=1}^n\det(A_{ik}) \det(B_{kj}). $$ One can check that $$ (AB)_{ij} = A_{i\cdot}B_{\cdot j}, $$ where $A_{i\cdot}$ and $B_{\cdot j}$ are the submatrices of $A$ and $B$ obtained by deleting the $i$-th row and the $j$-th column, respectively; $A_{i\cdot}\in\mathbb R^{(n-1)\times n}$, $B_{\cdot j}\in\mathbb R^{n\times(n-1)}$ (or any other field instead of $\mathbb R$).

By the Cauchy-Binet formula (the sum over $(n-1)$-element column subsets reduces to a sum over the single omitted index $k$), $$ \det(A_{i\cdot}B_{\cdot j})= \sum_{k=1}^n\det(A_{ik}) \det(B_{kj}). $$ This proves $$ \gamma_{ij} = (-1)^{i+j}\sum_{k=1}^n\det(A_{ik}) \det(B_{kj}) =(-1)^{i+j}\det((AB)_{ij} ), $$ hence the entries of $C(AB)$ and $C(A)C(B)$ coincide.
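The Cauchy-Binet step above can be checked numerically (a sketch using NumPy on seeded random matrices; the helper name `minor` is mine):

```python
import numpy as np

def minor(M, i, j):
    # Submatrix of M with row i and column j removed.
    return np.delete(np.delete(M, i, axis=0), j, axis=1)

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

for i in range(n):
    for j in range(n):
        # Left side: det(A_{i.} B_{.j}) -- delete row i of A, column j of B.
        lhs = np.linalg.det(np.delete(A, i, axis=0) @ np.delete(B, j, axis=1))
        # Right side: the Cauchy-Binet sum over the omitted index k.
        rhs = sum(np.linalg.det(minor(A, i, k)) * np.linalg.det(minor(B, k, j))
                  for k in range(n))
        assert np.isclose(lhs, rhs)
```

Note that deleting rows and columns with `np.delete` keeps the remaining indices in increasing order, which is why no extra signs appear in the sum.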

ANOTHER ANSWER

Since you are just beginning to learn linear algebra, I'm afraid the following proofs are not appropriate for you yet. Still, I'll leave them here for future reference.

Proof 1. (Assume you are talking about real or complex matrices.) For any real or complex matrix $X$ and $t\in\mathbb R$, note that $\det(X+tI)$ and the entries of $C(X+tI)$ are polynomials (and hence continuous functions) in $t$. Since $\det(X+tI)$ is a nonzero polynomial in $t$ (it is monic of degree $n$) and thus has only finitely many roots, $X+tI$ is invertible whenever $t>0$ is sufficiently small.

Now, from $X^{-1}=\frac1{\det(X)}\operatorname{adj}(X)$ and $\operatorname{adj}(X)=C(X)^T$ one obtains $C(X) = \det(X)(X^{-1})^T$. Therefore, if $X$ and $Y$ are invertible matrices, we have \begin{align} C(XY)&=\det(XY)((XY)^{-1})^T\\ &=\det(XY)(Y^{-1}X^{-1})^T\\ &=\det(XY)(X^{-1})^T(Y^{-1})^T\\ &=\det(X)\det(Y)(X^{-1})^T(Y^{-1})^T\\ &=C(X)C(Y). \end{align} In particular, $C((A+tI)(B+tI))=C(A+tI)C(B+tI)$ when $t>0$ is sufficiently small. Letting $t\to0$, the result follows by continuity.
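The identity $C(X)=\det(X)(X^{-1})^T$ and the resulting multiplicativity can be verified numerically for invertible matrices (a NumPy sketch; random Gaussian matrices are invertible almost surely, and the seed makes the check reproducible):

```python
import numpy as np

def cof(X):
    # C(X) = det(X) * (X^{-1})^T, valid when X is invertible (derived above
    # from X^{-1} = adj(X)/det(X) and adj(X) = C(X)^T).
    return np.linalg.det(X) * np.linalg.inv(X).T

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))
# Multiplicativity for (almost surely invertible) random matrices:
assert np.allclose(cof(X @ Y), cof(X) @ cof(Y))
```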

The above proof is an example of the so-called continuity argument. Often, if some property is known to be preserved in the limit, and we have difficulties in proving the property at a certain point, we may construct a sequence that converges to that point, and prove that the property holds for the sequence instead.

Proof 2. (Assume you are talking about matrices over, say, an integral domain or a field.) As $C(X) = \det(X)(X^{-1})^T$ when $X$ is invertible, it is straightforward to show that the target equality holds for invertible matrices. In particular, the target equality holds when $A,B$ are two matrices whose entries are $2n^2$ different indeterminates $a_{11},a_{12},\ldots,a_{nn},b_{11},b_{12},\ldots,b_{nn}$: over the field of rational functions in these indeterminates, $\det(A)$ and $\det(B)$ are nonzero, so both matrices are invertible. Since both sides of the equality are then equal as matrices of polynomials with integer coefficients, the equality also holds when $A$ and $B$ are specialised to any particular values.
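The generic-matrix argument can be carried out explicitly for small $n$ with a computer algebra system (a sketch using SymPy with $n=2$; the helper name `cof` is mine):

```python
import sympy as sp

n = 2  # small case for illustration; the symbolic identity holds for any n
a = sp.symbols(f'a0:{n*n}')
b = sp.symbols(f'b0:{n*n}')
A = sp.Matrix(n, n, a)  # entries are independent indeterminates
B = sp.Matrix(n, n, b)

def cof(M):
    # Cofactor matrix computed symbolically from the definition.
    return sp.Matrix(M.rows, M.cols, lambda i, j:
                     (-1) ** (i + j) * M.minor_submatrix(i, j).det())

# The equality holds as an identity of polynomials in the indeterminates:
assert (cof(A * B) - cof(A) * cof(B)).expand() == sp.zeros(n, n)
```

Because the two sides agree as polynomials, substituting any concrete values for the indeterminates preserves the equality, which is exactly the specialisation step of the proof.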

One more impressive use of this technique of considering "generic" matrices is to prove the Cayley-Hamilton theorem (which is usually introduced in the middle of an introductory linear algebra course). See q313284.