How to prove that $|A||M|=A_{11}A_{nn}-A_{1n}A_{n1}$


Question:

Let the matrix

$A=(a_{ij})_{n\times n}$, $i=1,2,\cdots,n$, $j=1,2,\cdots,n$, and let

$M=(a_{ij})_{2\le i,j\le n-1}$

be its central $(n-2)\times(n-2)$ submatrix, meaning that

$$A=\begin{bmatrix} a_{11}&\cdots&a_{1n}\\ \vdots& M&\vdots\\ a_{n1}&\cdots&a_{nn} \end{bmatrix}.$$ Show that

$$|A|\cdot |M|=A_{11}A_{nn}-A_{1n}A_{n1},$$ where $A_{ij}$ denotes the $(i,j)$ cofactor of the matrix $A$.

This problem is from a linear algebra problem book, and I cannot solve it: I do not see how to handle the product $|A||M|$.
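Before trying to prove the identity, it is easy to test it numerically (it is in fact an instance of Jacobi's identity for minors of the adjugate). A minimal NumPy sketch; the `cofactor` helper and all names are mine, not from the book:

```python
import numpy as np

def cofactor(A, i, j):
    """(i, j) cofactor of A, 0-based: (-1)^(i+j) times the minor deleting row i, column j."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A[1:-1, 1:-1]                      # central (n-2) x (n-2) block

lhs = np.linalg.det(A) * np.linalg.det(M)
rhs = (cofactor(A, 0, 0) * cofactor(A, n - 1, n - 1)
       - cofactor(A, 0, n - 1) * cofactor(A, n - 1, 0))
print(np.isclose(lhs, rhs))            # True
```

Of course this checks only one random matrix; the point is just to see the statement in action before proving it.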



EDIT: Originally I thought that a proof along these lines would be quite simple. Later I found a mistake in my attempted proof (you can see it below). I still think the proof can be done this way, but it is rather messy: we have to be careful about the signs and make sure that no non-zero terms are missing from the sums we obtain, so it is easy to make a mistake. I hope somebody will come up with a more elegant approach.


First notice that if we multiply a matrix of the above form from the right or from the left by $$B'=\begin{pmatrix}1&0&0\\0&B&0\\0&0&1\end{pmatrix},$$ then both sides of the equality are multiplied by $|B|^2$.

To see this, it suffices to notice that $A$ is multiplied by $B'$ and $|B'|=|B|$, so $|A|$ is multiplied by $|B|$. The middle block $M$ is changed to $BM$ (or $MB$), so the determinant of the new middle block is $|M|\cdot|B|$. And the submatrices of $A$ whose determinants give the cofactors on the RHS are multiplied either by $\begin{pmatrix}1&0\\0&B\end{pmatrix}$ or by $\begin{pmatrix}B&0\\0&1\end{pmatrix}$, so each of these cofactors is multiplied by $|B|$.
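This invariance can be illustrated numerically. A small sketch (multiplying from the left only; all names are mine):

```python
import numpy as np

def cofactor(A, i, j):
    """(i, j) cofactor of A, 0-based indices."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n - 2, n - 2))
Bp = np.eye(n)
Bp[1:-1, 1:-1] = B            # B' = diag(1, B, 1)
A2 = Bp @ A                   # multiply from the left
detB = np.linalg.det(B)

# the left-hand side |A||M| picks up a factor |B|^2 ...
lhs  = np.linalg.det(A)  * np.linalg.det(A[1:-1, 1:-1])
lhs2 = np.linalg.det(A2) * np.linalg.det(A2[1:-1, 1:-1])
print(np.isclose(lhs2, detB ** 2 * lhs))                          # True

# ... and each corner cofactor picks up a factor |B|
print(np.isclose(cofactor(A2, 0, 0), detB * cofactor(A, 0, 0)))   # True
```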

Now we can find non-singular matrices $B_1$, $B_2$ such that $$\begin{pmatrix}1&0&0\\0&B_1&0\\0&0&1\end{pmatrix} \begin{pmatrix} a_{11}&\cdots&a_{1n}\\ \vdots& M&\vdots\\ a_{n1}&\cdots&a_{nn} \end{pmatrix} \begin{pmatrix}1&0&0\\0&B_2&0\\0&0&1\end{pmatrix}= \begin{pmatrix} b_{11}&\cdots&b_{1n}\\ \vdots& D&\vdots\\ b_{n1}&\cdots&b_{nn} \end{pmatrix}$$ where $D$ is a diagonal matrix.

If $M$ is non-singular, we can simply take $B_1=M^{-1}$ and $B_2=I$; but in any case we can reduce $M$ to a diagonal matrix by combining elementary row and elementary column operations.

The above arguments show that it is sufficient to prove this in the case when $M$ is a diagonal matrix.


Now if $M=\operatorname{diag}(d_2,\dots,d_{n-1})$, then we can apply the Leibniz formula to the determinant $$|A|=\begin{vmatrix} a_{11}&\cdots&a_{1n}\\ \vdots& M&\vdots\\ a_{n1}&\cdots&a_{nn} \end{vmatrix}.$$

By analyzing which permutations have a nonzero contribution we find that $$|A|=a_{11}d_2\cdots d_{n-1}a_{nn} - a_{1,n}d_2\cdots d_{n-1}a_{n1} - \sum_{i=2}^{n-1} a_{n,n} a_{i,1}a_{1,i} \frac{d_2\cdots d_{n-1}}{d_i} - \sum_{j=2}^{n-1} a_{11} a_{j,n}a_{n,j} \frac{d_2\cdots d_{n-1}}{d_j} + \sum_{i=2}^{n-1} a_{1,n} a_{i,1}a_{n,i} \frac{d_2\cdots d_{n-1}}{d_i} + \sum_{i=2}^{n-1} a_{n,1} a_{1,i}a_{i,n} \frac{d_2\cdots d_{n-1}}{d_i} + \sum_{\substack{2\le i,j \le n-1\\i\ne j}} a_{1,i}a_{n,j}a_{i,1}a_{j,n} \frac{d_2\cdots d_{n-1}}{d_i d_j} - \sum_{\substack{2\le i,j \le n-1\\i\ne j}} a_{1,i}a_{n,j}a_{i,n}a_{j,1} \frac{d_2\cdots d_{n-1}}{d_i d_j}. $$ I will not include the detailed analysis of all cases. Let us look, for example, at what happens if we choose $a_{1,i}$ in the first row and $a_{n,j}$ in the last row. (This requires $i\ne j$.) Then $d_i$ and $d_j$ cannot be used. If we want to get non-zero entries from the remaining $(n-2)$ rows, we must use all the remaining diagonal elements. This leaves us with using either $a_{i,1}$ or $a_{i,n}$ in the $i$-th row, and this choice determines the choice in the $j$-th row. So we get either $a_{1,i}a_{n,j}a_{i,1}a_{j,n}$ or $a_{1,i}a_{n,j}a_{i,n}a_{j,1}$, multiplied by all diagonal elements except $d_i$ and $d_j$. Then we also have to check the sign of the permutation.
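For a fixed small $n$ the case analysis can be checked against a direct determinant computation. A NumPy sketch with $n=5$ (all names are mine; 0-based index $i$ corresponds to 1-based $i+1$, so the diagonal entry of the middle row $i$ is `d[i-1]`):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
d = rng.standard_normal(n - 2)
A[1:-1, 1:-1] = np.diag(d)            # middle block M = diag(d_2, ..., d_{n-1})
P = np.prod(d)
mid = range(1, n - 1)                 # 0-based indices of the middle rows/columns

# term-by-term transcription of the Leibniz-formula expansion of |A|
expansion = (A[0, 0] * A[-1, -1] * P - A[0, -1] * A[-1, 0] * P
             - sum(A[-1, -1] * A[i, 0] * A[0, i] * P / d[i - 1] for i in mid)
             - sum(A[0, 0] * A[j, -1] * A[-1, j] * P / d[j - 1] for j in mid)
             + sum(A[0, -1] * A[i, 0] * A[-1, i] * P / d[i - 1] for i in mid)
             + sum(A[-1, 0] * A[0, i] * A[i, -1] * P / d[i - 1] for i in mid)
             + sum(A[0, i] * A[-1, j] * A[i, 0] * A[j, -1] * P / (d[i - 1] * d[j - 1])
                   for i in mid for j in mid if i != j)
             - sum(A[0, i] * A[-1, j] * A[i, -1] * A[j, 0] * P / (d[i - 1] * d[j - 1])
                   for i in mid for j in mid if i != j))
print(np.isclose(expansion, np.linalg.det(A)))    # True
```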

Using a similar analysis we can find that:

$$A_{11} = a_{nn} d_2\cdots d_{n-1} - \sum_{j=2}^{n-1} a_{n,j} a_{j,n} \frac{d_2\cdots d_{n-1}}{d_j} $$

$$A_{nn} = a_{11} d_2\cdots d_{n-1} - \sum_{i=2}^{n-1} a_{1,i} a_{i,1} \frac{d_2\cdots d_{n-1}}{d_i} $$

$$A_{n1} = -a_{1n} d_2 \cdots d_{n-1} + \sum_{i=2}^{n-1} a_{1,i} a_{i,n} \frac{d_2\cdots d_{n-1}}{d_i}$$

$$A_{1n} = -a_{n1} d_2 \cdots d_{n-1} + \sum_{j=2}^{n-1} a_{n,j} a_{j,1} \frac{d_2\cdots d_{n-1}}{d_j} $$

Now we can check that expanding the product $A_{11}A_{nn}-A_{n1}A_{1n}$ gives exactly $|A|\cdot|M|$. (The only minor difference is that we also get summands with $i=j$ in the double sums, but these cancel out.)
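These four expressions, and the final product, can likewise be checked numerically for small $n$ (again a sketch with $n=5$ and a cofactor helper of my own; 0-based middle index $i$ has diagonal entry `d[i-1]`):

```python
import numpy as np

def cofactor(A, i, j):
    """(i, j) cofactor of A, 0-based indices."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
d = rng.standard_normal(n - 2)
A[1:-1, 1:-1] = np.diag(d)            # M = diag(d_2, ..., d_{n-1})
P = np.prod(d)                        # P = |M|
mid = range(1, n - 1)

# the four cofactor expressions for diagonal M
A11 = A[-1, -1] * P - sum(A[-1, j] * A[j, -1] * P / d[j - 1] for j in mid)
Ann = A[0, 0] * P - sum(A[0, i] * A[i, 0] * P / d[i - 1] for i in mid)
An1 = -A[0, -1] * P + sum(A[0, i] * A[i, -1] * P / d[i - 1] for i in mid)
A1n = -A[-1, 0] * P + sum(A[-1, j] * A[j, 0] * P / d[j - 1] for j in mid)

print(np.isclose(A11, cofactor(A, 0, 0)))              # True
print(np.isclose(Ann, cofactor(A, n - 1, n - 1)))      # True
print(np.isclose(An1, cofactor(A, n - 1, 0)))          # True
print(np.isclose(A1n, cofactor(A, 0, n - 1)))          # True
print(np.isclose(A11 * Ann - An1 * A1n,
                 np.linalg.det(A) * P))                # True
```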


EDIT: This was my original attempt, which is incorrect. The mistakes in this approach are the incorrect assumption that only two terms in the expansion of $|A|$ are non-zero, and the incorrect expressions for $A_{11}$, $A_{nn}$, $A_{1n}$ and $A_{n1}$.

Now if $M=\operatorname{diag}(d_2,\dots,d_{n-1})$, then we can see directly from the Leibniz formula that $$|A|=\begin{vmatrix} a_{11}&\cdots&a_{1n}\\ \vdots& M&\vdots\\ a_{n1}&\cdots&a_{nn} \end{vmatrix} = a_{11}d_2\dots d_{n-1} a_{nn} - a_{1n} d_2\dots d_{n-1} a_{n1},$$ since the contributions of all other permutations to the determinant are either zero or come in pairs which cancel out.

And from this we get $$|A|\cdot |M| = (a_{11}d_2\dots d_{n-1}) (d_2\dots d_{n-1}a_{nn}) - (a_{1n} d_2\dots d_{n-1}) (d_2\dots d_{n-1}a_{n1}) = A_{nn}A_{11} - A_{n1}A_{1n}. $$


You can play the good old indeterminate trick. Suppose the matrix entries are taken from a field $K$ (an integral domain is also OK: it suffices to prove the equality over its field of fractions). Let $a_{11},\ldots,a_{nn}$ be $n^2$ indeterminates and $A=(a_{ij})_{i,j\in\{1,2,\ldots,n\}}$. Then $A$ is a matrix over the field of fractions $F$ of the polynomial ring $K[a_{11},\ldots,a_{nn}]$. Write $$ A=\begin{bmatrix} a_{11}&p^\top&a_{1n}\\ u &M &v\\ a_{n1}&q^\top&a_{nn} \end{bmatrix}. $$

Since the entries of $M$ are $(n-2)^2$ different indeterminates, $M$ is invertible over $F$. Therefore, by performing appropriate row and column operations, we get \begin{align} |A| =\left|\begin{matrix} a_{11}&p^\top&a_{1n}\\ u &M &v\\ a_{n1}&q^\top&a_{nn} \end{matrix}\right| &=\left|\begin{matrix} a_{11}-p^\top M^{-1}u&0&a_{1n}-p^\top M^{-1}v\\ u &M &v\\ a_{n1}-q^\top M^{-1}u&0&a_{nn}-q^\top M^{-1}v \end{matrix}\right|\\ &=\left|\begin{matrix} a_{11}-p^\top M^{-1}u&0&a_{1n}-p^\top M^{-1}v\\ 0 &M &0\\ a_{n1}-q^\top M^{-1}u&0&a_{nn}-q^\top M^{-1}v \end{matrix}\right| = |B|, \end{align} where $B$ denotes the big matrix on the last line.

Note that the cofactors at the four corners of $A$ are identical to their counterparts in $B$, because the corresponding submatrices of $B$ are obtained by applying the above row and column operations to the submatrices of $A$. (Alternatively, consider Schur complements.) It is straightforward to verify that $$ \left|\begin{matrix} b_{11}&0&b_{1n}\\ 0 &M &0\\ b_{n1}&0&b_{nn} \end{matrix}\right| |M| = (b_{11}b_{nn}-b_{1n}b_{n1})|M|^2 = B_{11}B_{nn}-B_{1n}B_{n1}. $$ Therefore the polynomial identity $|A|\cdot|M| = A_{11}A_{nn}-A_{1n}A_{n1}$ (here both sides are polynomials in the $n^2$ variables $a_{11},\ldots,a_{nn}$) holds as well. So, when the $n^2$ indeterminates are specialised to any $n^2$ values in $K$, the equality also holds.
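The elimination step and the matching corner cofactors can be illustrated numerically (a sketch only: $u$, $v$, $p$, $q$ name the blocks as in the decomposition above, the remaining names are mine, and a random real matrix stands in for the indeterminates):

```python
import numpy as np

def cofactor(A, i, j):
    """(i, j) cofactor of A, 0-based indices."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n))
M = A[1:-1, 1:-1]
u, v = A[1:-1, 0], A[1:-1, -1]        # first and last columns, middle rows
p, q = A[0, 1:-1], A[-1, 1:-1]        # first and last rows, middle columns
Minv = np.linalg.inv(M)

# B = result of the row and column eliminations described above
B = np.zeros((n, n))
B[1:-1, 1:-1] = M
B[0, 0]   = A[0, 0]   - p @ Minv @ u
B[0, -1]  = A[0, -1]  - p @ Minv @ v
B[-1, 0]  = A[-1, 0]  - q @ Minv @ u
B[-1, -1] = A[-1, -1] - q @ Minv @ v

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))          # True
print(np.isclose(cofactor(A, 0, 0), cofactor(B, 0, 0)))        # True
print(np.isclose(cofactor(A, n - 1, 0), cofactor(B, n - 1, 0)))  # True
```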