Why $\mathrm{adj}(A)\cdot A = A\cdot\mathrm{adj}(A)$?


I know that $A\cdot\mathrm{adj}(A) = \det(A) \cdot I$, but why does $\mathrm{adj}(A)\cdot A = A\cdot\mathrm{adj}(A)$?


There are 5 best solutions below

BEST ANSWER

I was asked to turn my comment above into an answer, so here we go:

First note that you already know \begin{equation} \det\left(A^T\right)\cdot I=A^T\cdot \mathrm{adj}\left(A^T\right), \end{equation} too.

Now use \begin{equation} \det\left(A\right)=\det\left(A^T\right) \end{equation} (which follows from the permutation formula for $\det$) and \begin{equation} \mathrm{adj}\left(A^T\right)=\mathrm{adj}\left(A\right)^T \end{equation} (which you get by applying the first identity to the minor determinants in the definition of the entries $\mathrm{adj}\left(A\right)_{i,j}$ of the adjugate).

You end up with:

\begin{align} A\cdot\mathrm{adj}\left(A\right)&=\det\left(A\right)\cdot I=\left(\det\left(A\right)\cdot I\right)^{T}\\&=\left(\det\left(A^T\right)\cdot I\right)^{T}=\left(A^T\cdot \mathrm{adj}\left(A^T\right)\right)^T\\&=\mathrm{adj}\left(A^T\right)^T\cdot A=\mathrm{adj}\left(A\right)\cdot A. \end{align}
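The transposition identities used above can be sanity-checked numerically. Here is a sketch in numpy; the `adjugate` helper (built directly from cofactors) is my own illustration, not a numpy builtin:

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix (illustrative helper, not a numpy builtin)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 5.0, 6.0]])

# adj(A^T) = adj(A)^T, the key identity in the argument above
assert np.allclose(adjugate(A.T), adjugate(A).T)
# and consequently adj(A)·A = A·adj(A) = det(A)·I
assert np.allclose(adjugate(A) @ A, A @ adjugate(A))
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
```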


When $A$ is invertible, $A^{-1}=\frac{1}{\mathrm{det}(A)}\mathrm{adj}(A)$, and since $A$ represents a finite-dimensional linear operator, a left inverse is also a right inverse, i.e. $AA^{-1}=A^{-1}A=I$. From this it follows easily that $\mathrm{adj}(A)\cdot A = A\cdot\mathrm{adj}(A)$ by cancelling the scalar $\frac{1}{\mathrm{det}(A)}$ from both sides.
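Under the invertibility assumption this argument makes, the relation $\mathrm{adj}(A)=\det(A)\,A^{-1}$ can be checked directly; a minimal numpy sketch:

```python
import numpy as np

# Invertible A: adj(A) = det(A) * A^{-1}, so both products collapse to det(A)·I
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
detA = np.linalg.det(A)          # 4*6 - 7*2 = 10, nonzero
adjA = detA * np.linalg.inv(A)   # adjugate recovered from the inverse

assert np.allclose(A @ adjA, adjA @ A)           # A·adj(A) = adj(A)·A
assert np.allclose(adjA @ A, detA * np.eye(2))   # both equal det(A)·I
```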


Let $A\in M_n(K)$. If $K=\mathbb{C}$, then use Travis's argument. If $K$ is a commutative ring with unity, then use the reference (Bill's answer)

Sylvester's determinant identity

given above by Bigbear.

EDIT. @ user3697301

  1. For $K=\mathbb{C}$, Travis, in his comment below, gave the key to a complete proof of (*): $\mathrm{adj}(A)\cdot A=A\cdot\mathrm{adj}(A)=\det(A)I_n$. If you do not work, then you cannot do mathematics.

  2. What is interesting is that (*) also holds when $K$ is a commutative ring with unity; the key to the proof is in the MSE reference above.
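The point that (*) is a polynomial identity, hence valid over any commutative ring with unity, can be illustrated symbolically. A sketch using sympy (whose `Matrix.adjugate` computes the classical adjugate) on a $2\times 2$ matrix of indeterminates:

```python
import sympy as sp

# A 2x2 matrix of indeterminates: its entries live in the polynomial ring Z[a,b,c,d]
a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

# (*) holds as an identity of polynomials, not just for numerical matrices
lhs = sp.expand(A.adjugate() * A)
rhs = sp.expand(A * A.adjugate())
assert lhs == rhs == sp.det(A) * sp.eye(2)
```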


Let $A$ be an $n \times n$ matrix, $A_{i,j}$ the $(i,j)$-minor of $A$ and $C_{i,j}$ the $(i,j)$-cofactor of $A$, defined as: $$ C_{i,j} = (-1)^{i+j}A_{i,j}. $$ By definition we know that the adjugate of $A$ is: $$ \operatorname{adj} A = [C_{j,i}]. $$

The cofactor expansion along rows gives for all $i,j=1,\dots,n$: $$ \sum_{k=1}^{n} a_{i,k} C_{j,k} = \delta_{i,j}\det A, $$ and along columns gives for all $i,j=1,\dots,n$: $$ \sum_{k=1}^n a_{k,i}C_{k,j} = \delta_{i,j}\det A, $$ where $\delta_{i,j}$ is the Kronecker delta.

You can express these equations using the definition of the adjugate matrix as follows: $$ A \cdot \operatorname{adj} A = \det A \cdot I_n, $$ and $$ \operatorname{adj} A \cdot A = \det A \cdot I_n, $$ where $I_n = [\delta_{i,j}]$ is the identity matrix of size $n \times n$. From here we have that $$ A \cdot \operatorname{adj} A = \operatorname{adj} A \cdot A = \det A \cdot I_n. $$
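Both cofactor expansions can be verified numerically; a sketch in numpy, where the cofactor matrix is built directly from the definition above:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
n = A.shape[0]
detA = np.linalg.det(A)

# Cofactor matrix: C[i, j] = (-1)^(i+j) times the (i, j)-minor of A
C = np.array([[(-1) ** (i + j) *
               np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))
               for j in range(n)] for i in range(n)])

# Row expansion  sum_k a[i,k] C[j,k] = delta_{ij} det(A),  i.e. A @ C.T = det(A)·I
assert np.allclose(A @ C.T, detA * np.eye(n))
# Column expansion  sum_k a[k,i] C[k,j] = delta_{ij} det(A),  i.e. A.T @ C = det(A)·I
assert np.allclose(A.T @ C, detA * np.eye(n))
# Since adj(A) = C.T, together these say A·adj(A) = adj(A)·A = det(A)·I
assert np.allclose(A @ C.T, C.T @ A)
```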


You know it is true if $\det(A) \ne 0$. But the invertible matrices are dense in the space of all square matrices, so the conclusion follows, since the adjugate and determinant are continuous functions of the matrix entries.

This works over the fields of real and complex numbers. Do you want other fields as well? Then argue as follows. You can write your identity as $n^2$ expressions in the coefficients of the matrices. Thus you have $n^2$ polynomials in $n^2$ variables you wish to show are identically zero. But the polynomials are zero whenever you substitute in any real numbers. Hence they must be identically zero.
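The density argument says the identity survives at singular matrices, where the inverse-based proof does not apply. A numpy sketch checking it on a matrix with $\det(A)=0$ (the `adjugate` helper is an illustration, not a numpy builtin):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix (illustrative helper, not a numpy builtin)."""
    n = A.shape[0]
    C = np.array([[(-1) ** (i + j) *
                   np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))
                   for j in range(n)] for i in range(n)])
    return C.T

# A singular matrix (second row is twice the first), so A has no inverse
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
assert abs(np.linalg.det(A)) < 1e-10

# The identity survives the limit: adj(A)·A = A·adj(A) = det(A)·I = 0
assert np.allclose(adjugate(A) @ A, A @ adjugate(A))
assert np.allclose(A @ adjugate(A), np.zeros((3, 3)))
```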