Prove $ \mbox{adj} (A) = \left[ \frac{\partial }{\partial a_{ij} } \det(A) \right]^T $


I have a bit of trouble proving an adjugate matrix equality for $A_{n\times n} = [a_{ij}]$,

$$ \mbox{adj} (A) = \left[ \frac{\partial }{\partial a_{ij} } \det(A) \right]^T, \quad i, j = 1,\dots,n $$

I tried arguing from the definition:

$$ \mbox{adj} (A) = \left[ (-1)^{i+j} \sum_{\sigma \in C_n } \left( \mbox{sgn} (\sigma) \prod_{\substack{k=1\\k\neq i}}^n a_{k,\sigma(k)} \right) \right]^T $$

where $C_n = \{\sigma \in S_n : \sigma(i) = j \}$ and $S_n$ is the set of all permutations of $\{1,\dots,n\}$. Here $\mathrm{sgn}(\sigma)$ is $-1$ if the permutation $\sigma$ is odd and $1$ otherwise, and $\sigma(k)$ denotes the image of $k$ under $\sigma$. The expression inside the square brackets is the $(i,j)$-cofactor. I pause here and look at the RHS of the statement

$$ \left[ \frac{\partial }{\partial a_{ij} } \det(A) \right]^T = \left[\sum_{\sigma\in C_n } \left( \mbox{sgn} (\sigma) \prod_{\substack{k=1\\ k \neq i}}^n a_{k,\sigma(k)} \right) \right]^T $$

But I'm unable to deal with the signs ...
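Before chasing the signs, one can at least confirm the identity numerically on a small matrix by comparing the adjugate with a finite-difference gradient of the determinant. A throwaway Python sketch (all helper names here are ad hoc, not from any library):

```python
from itertools import permutations
from math import prod

def sign(p):
    # Parity of a permutation tuple, via inversion count.
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def det_perm(A):
    # Leibniz formula: det(A) = sum over sigma of sgn(sigma) * prod_k A[k][sigma(k)].
    n = len(A)
    return sum(sign(p) * prod(A[k][p[k]] for k in range(n)) for p in permutations(range(n)))

def minor(A, i, j):
    # Delete row i and column j.
    return [[A[r][c] for c in range(len(A)) if c != j] for r in range(len(A)) if r != i]

def adjugate(A):
    # adj(A) is the TRANSPOSE of the cofactor matrix: adj(A)[j][i] = (-1)^(i+j) det(minor_ij).
    n = len(A)
    return [[(-1) ** (i + j) * det_perm(minor(A, i, j)) for i in range(n)] for j in range(n)]

def grad_det(A, h=1e-6):
    # Central difference of det w.r.t. each entry a_ij; det is affine in each
    # single entry, so this is exact up to floating-point rounding.
    n = len(A)
    G = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            Ap = [row[:] for row in A]
            Am = [row[:] for row in A]
            Ap[i][j] += h
            Am[i][j] -= h
            G[i][j] = (det_perm(Ap) - det_perm(Am)) / (2 * h)
    return G

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
G = grad_det(A)
adj = adjugate(A)
# The claim adj(A) = [d det / d a_ij]^T, checked entrywise: adj[j][i] == G[i][j].
max_err = max(abs(adj[j][i] - G[i][j]) for i in range(3) for j in range(3))
```

The check passes (the residual is pure floating-point noise), so the identity itself is sound; only my sign bookkeeping above is off.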

There are 3 answers below.

Best Answer

You can use the Laplace expansion along row $i$ to write $$ \det(\mathbf{A}) = \sum_j C_{ij} A_{ij} $$ where $C_{ij}$ is called the cofactor of $A_{ij}$.

Since no cofactor $C_{ik}$ in this expansion depends on the entry $A_{ij}$, it follows that $$ \frac{\partial \det(\mathbf{A})}{\partial A_{ij}} = C_{ij} $$ or, in matrix form, $$ \frac{\partial \det(\mathbf{A})}{\partial \mathbf{A}} = \mathbf{C} = \left[\mathrm{adj}(\mathbf{A}) \right]^{T} $$
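The key point that $C_{ij}$ is exactly the coefficient of $A_{ij}$ can be checked numerically: the row-$i$ expansion must reproduce $\det(\mathbf{A})$ for every row $i$. A small ad-hoc Python sketch (helper names are made up for this check):

```python
from itertools import permutations
from math import prod

def det(A):
    # Leibniz formula for the determinant.
    n = len(A)
    def sgn(p):
        return (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
    return sum(sgn(p) * prod(A[k][p[k]] for k in range(n)) for p in permutations(range(n)))

def cofactor(A, i, j):
    # C_ij = (-1)^(i+j) times the determinant with row i and column j deleted.
    M = [[A[r][c] for c in range(len(A)) if c != j] for r in range(len(A)) if r != i]
    return (-1) ** (i + j) * det(M)

A = [[1, 2, 3],
     [0, 4, 5],
     [1, 0, 6]]
d = det(A)
# Laplace expansion along each row i: sum_j A[i][j] * C_ij should equal det(A).
expansions = [sum(A[i][j] * cofactor(A, i, j) for j in range(3)) for i in range(3)]
```

Because each $C_{ij}$ is built only from entries outside row $i$ and column $j$, differentiating the expansion termwise immediately yields $\partial \det / \partial A_{ij} = C_{ij}$.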

Answer

Let's prove this identity in a slightly different way from the one you tried. Let $\varphi^{\ell}$ be the unique increasing bijection from $\{1,2,\ldots,n-1\}$ to $\{1,2,\ldots,\ell-1,\ell+1,\ldots,n\}$. Observe that for every fixed $\ell \in \{1,2,\ldots,n\}$ we have $$ \varphi^{\ell}(1)< \varphi^{\ell}(2)<\ldots <\varphi^{\ell}(n-1) $$ In this notation, the Laplace expansion of $\det(A)$ along the $j$-th column is $$ \det(A)=\sum_{i=1}^{n} a_{ij}\cdot (-1)^{i+j}\cdot \left( \sum_{\sigma \in S_{n-1}}\mathrm{sgn}(\sigma)\prod_{k = 1}^{n-1}a_{\varphi^{i}(k)\,\varphi^{j}(\sigma(k))} \right) $$ and therefore $$ \dfrac{\partial}{\partial \,a_{ij}} \det(A) = (-1)^{i+j}\cdot \left( \sum_{\sigma \in S_{n-1}}\mathrm{sgn}(\sigma)\prod_{k = 1}^{n-1}a_{\varphi^{i}(k)\,\varphi^{j}(\sigma(k))} \right) $$ Now note that the inner sum $ \sum_{\sigma \in S_{n-1}}\mathrm{sgn}(\sigma)\prod_{k = 1}^{n-1}a_{\varphi^{i}(k)\,\varphi^{j}(\sigma(k))} $ is precisely the determinant of the $(n-1)\times (n-1)$ submatrix of $A$ obtained by deleting row $i$ and column $j$. Since the adjugate is the transpose of the cofactor matrix, $$ \mathrm{adj}(A)=\left( (-1)^{i+j}\cdot \sum_{\sigma \in S_{n-1}}\mathrm{sgn}(\sigma)\prod_{k = 1}^{n-1}a_{\varphi^{i}(k)\,\varphi^{j}(\sigma(k))} \right)_{n\times n}^{T} $$
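The $\varphi^{\ell}$ bookkeeping can be checked numerically: the signed inner sum over $S_{n-1}$ should reproduce the determinant of the submatrix with row $i$ and column $j$ deleted. A quick 0-based Python sketch (names are ad hoc, written just for this check):

```python
from itertools import permutations
from math import prod

def sgn(p):
    # Parity of a permutation sequence via inversion count.
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def det(A):
    # Leibniz formula.
    n = len(A)
    return sum(sgn(p) * prod(A[k][p[k]] for k in range(n)) for p in permutations(range(n)))

def phi(l, n):
    # 0-based version of phi^l: the unique increasing bijection
    # {0,...,n-2} -> {0,...,n-1} \ {l}.
    return [m for m in range(n) if m != l]

A = [[3, 1, 4],
     [1, 5, 9],
     [2, 6, 5]]
n = 3
i, j = 1, 2                       # delete row i and column j (0-based)
pi, pj = phi(i, n), phi(j, n)
# Inner sum: sum over sigma in S_{n-1} of sgn(sigma) * prod_k a[phi^i(k)][phi^j(sigma(k))].
inner = sum(sgn(s) * prod(A[pi[k]][pj[s[k]]] for k in range(n - 1))
            for s in permutations(range(n - 1)))
# Determinant of the actual deleted-row/column submatrix, for comparison.
minor_det = det([[A[r][c] for c in pj] for r in pi])
```

Both quantities come out equal, confirming that the $\varphi$-maps relabel the surviving rows and columns correctly.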

Answer

You can surely use the Laplace expansion, but (imo) it is also interesting to see this directly from the definition of the determinant, writing the $k$-th column as $\vec{a_k}=\sum_{i_k=1}^n a_{i_k,k} \, \vec{e_{i_k}}$ where $\vec{e_1}=(1,0,0,...,0)^t$, $\vec{e_2}=(0,1,0,...,0)^t$, ..., $\vec{e_n}=(0,0,0,...,1)^t$. Then $$\det (A) = \det \left( \vec{a_1} \cdots \vec{a_k} \cdots \vec{a_n} \right) \\ =\sum_{i_1=1}^n\cdots\sum_{i_k=1}^n\cdots\sum_{i_n=1}^n a_{i_1,1}\cdots a_{i_k,k} \cdots a_{i_n,n} \det \left( \vec{e_{i_1}} \cdots \vec{e_{i_k}} \cdots \vec{e_{i_n}} \right)$$ $$\frac{{\rm d}}{{\rm d}a_{j,k}} \det(A) =\sum_{\substack{i_1,...,i_{k-1},\\ i_{k+1},...,i_n=1}}^n a_{i_1,1}\cdots a_{i_{k-1},k-1} a_{i_{k+1},k+1} \cdots a_{i_n,n} \det \left( \vec{e_{i_1}} \cdots \vec{e_{i_{k-1}}}\vec{e_{j}}\vec{e_{i_{k+1}}} \cdots \vec{e_{i_n}} \right) \tag{1}$$ where it is clear that if $i_1,...,i_{k-1},j,i_{k+1},...,i_n$ are not all distinct, the last determinant is zero and the term gives no contribution. It is also clear that its value is obtained from $\det\left(\vec{e_1} \vec{e_2} \cdots \vec{e_n} \right)=1$ by permuting the columns, which contributes only a sign factor determined by the parity of that permutation. We can then move $\vec{e_j}$ from the $k$-th column to the $n$-th column by adjacent transpositions, giving a factor of $(-1)^{n-k}$. Likewise, the $j$-th row can then be moved to the $n$-th row, giving a factor of $(-1)^{n-j}$.
If we then define the $(n-1)$-dimensional basis vectors $\vec{e_1^*},...,\vec{e_{j-1}^*},\vec{e_{j+1}^*},...,\vec{e_n^*}$, obtained by deleting the $j$-th row from the unstarred basis vectors, the resulting $n \times n$ matrix has the form $$E_{jk} = \begin{pmatrix} \vec{e_{i_1}^*} & \cdots & \vec{e_{i_{k-1}}^*} & \vec{e_{i_{k+1}}^*} & \cdots & \vec{e_{i_{n}}^*} & \vec{0} \\ 0 & \cdots & 0 & 0 & \cdots & 0 & 1 \end{pmatrix} = \begin{pmatrix} E_{jk}^* & \vec{0} \\ \vec{0}^t & 1 \end{pmatrix}$$ which has the value $1$ in the $n$-th row/column and an $(n-1)\times (n-1)$ (upper-left) submatrix $E_{jk}^*$, as if we had deleted the $j$-th row and $k$-th column. Clearly, $E_{jk}$ and $E_{jk}^*$ have the same determinant, since the number of transpositions required to reach the identity matrix is the same.

Hence, since the summation indices can be restricted to avoid the value $j$, (1) can be viewed as a representation of the $(j,k)$-minor times a sign factor $(-1)^{2n-j-k}=(-1)^{j+k}$: $$\frac{{\rm d}}{{\rm d}a_{j,k}} \det (A) = (-1)^{2n-j-k} \sum_{\substack{i_1,...,i_{k-1},i_{k+1},...,i_n=1 \\ i_1,...,i_{k-1},i_{k+1},...,i_n \neq j}}^n a_{i_1,1}\cdots a_{i_{k-1},k-1} a_{i_{k+1},k+1} \cdots a_{i_n,n} \, \det\left(\vec{e_{i_1}^*} \cdots \vec{e_{i_{k-1}}^*} \vec{e_{i_{k+1}}^*} \cdots \vec{e_{i_{n}}^*}\right) = (-1)^{j+k} \det\left(\vec{a_{1}^*} \cdots \vec{a_{k-1}^*} \vec{a_{k+1}^*} \cdots \vec{a_{n}^*}\right) = (-1)^{j+k} M_{jk} = C_{jk}$$ where $$\vec{a_m^*}=\sum_{\substack{i=1 \\ i\neq j}}^n a_{i,m} \vec{e_i^*} \quad \text{for} \quad m=1,...,k-1,k+1,...,n \, ,$$ $M_{jk}$ is the $(j,k)$-minor of $A$, and $C$ is the cofactor matrix.
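Representation (1) can be transcribed almost literally into code: fix $\vec{e_j}$ in column $k$, sum over the remaining indices, and weight each surviving term by the determinant of the basis-vector matrix, which is just the sign of the column tuple read as a permutation. A 0-based Python sketch (helper names are invented for this check):

```python
from itertools import product, permutations
from math import prod

def sgn(p):
    # Sign of a permutation sequence, by counting inversions.
    return (-1) ** sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

def det(A):
    # Leibniz formula.
    n = len(A)
    return sum(sgn(p) * prod(A[r][p[r]] for r in range(n)) for p in permutations(range(n)))

def ddet_via_basis_sum(A, j, k):
    # Equation (1), 0-based: e_j fixed in column k, sum over the other indices.
    n = len(A)
    total = 0
    for idx in product(range(n), repeat=n - 1):
        cols = idx[:k] + (j,) + idx[k:]   # row index of the basis vector in each column
        if len(set(cols)) < n:
            continue                      # repeated basis vector => determinant is zero
        # Product of the matrix entries over the columns other than k.
        coeff = prod(A[idx[m]][m if m < k else m + 1] for m in range(n - 1))
        total += coeff * sgn(cols)        # sgn(cols) = det(e_{cols[0]} ... e_{cols[n-1]})
    return total

def cofactor(A, j, k):
    M = [[A[r][c] for c in range(len(A)) if c != k] for r in range(len(A)) if r != j]
    return (-1) ** (j + k) * det(M)

A = [[2, 0, 1],
     [1, 1, 0],
     [3, 2, 1]]
# The sum from (1) should equal the cofactor C_jk for every position.
agree = all(ddet_via_basis_sum(A, j, k) == cofactor(A, j, k)
            for j in range(3) for k in range(3))
```

All nine entries agree, which is the content of the derivation above: the restricted sum in (1) is exactly $(-1)^{j+k} M_{jk} = C_{jk}$.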

Maybe this answer is relatively long compared to Steph's, but it doesn't require any prerequisites such as the Laplace expansion.