Finding the inverse using the cofactor formula


Given the matrix $$ A= \begin{bmatrix} 1&2&3\\ 0&2&3\\ 0&0&3\\ \end{bmatrix} $$ I calculated $\det(A)= 6$ and the entries below:

  • $(A^{-1})_{11} = 6$
  • $(A^{-1})_{12} = 0$
  • $(A^{-1})_{13} = 0$
  • $(A^{-1})_{21} = -6$
  • $(A^{-1})_{22} = 3$
  • $(A^{-1})_{23} = 0$
  • $(A^{-1})_{31} = 0$
  • $(A^{-1})_{32} = -3$
  • $(A^{-1})_{33} = 2$

Which lead me to: $$ (A^{-1})= \frac 16\ \begin{bmatrix} 6&0&0\\ -6&3&0\\ 0&-3&2\\ \end{bmatrix} $$

but the answer is $$ (A^{-1})= \frac16\ \begin{bmatrix} 6&-6&0\\ 0&3&-3\\ 0&0&2\\ \end{bmatrix} $$

I don't know what I did wrong. I'd appreciate any help from you guys!
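A quick numerical check can settle which candidate actually inverts $A$. Below is a minimal sketch in plain Python (the helper `matmul` and the variable names `mine`/`book` are my own, not from the question), using exact fractions to avoid floating-point noise:

```python
# Check which candidate M satisfies A * M == I.
from fractions import Fraction

A = [[1, 2, 3], [0, 2, 3], [0, 0, 3]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def matmul(X, Y):
    """Multiply two square matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The asker's attempt: (1/6) * cofactor matrix, no transpose.
mine = [[Fraction(v, 6) for v in row]
        for row in [[6, 0, 0], [-6, 3, 0], [0, -3, 2]]]
# The book's answer: (1/6) * transpose of the cofactor matrix.
book = [[Fraction(v, 6) for v in row]
        for row in [[6, -6, 0], [0, 3, -3], [0, 0, 2]]]

print(matmul(A, mine) == I)  # False: the cofactor matrix alone is not the inverse
print(matmul(A, book) == I)  # True: the transpose is needed
```

Only the book's matrix passes the check, which already points at the missing transpose.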


There are 2 answers below.

BEST ANSWER

To address the question writer's question of why we need to take the transpose, I derive the formula from the Laplace expansion, which he/she knows well and has already used in the calculations above. In this way, he/she will really understand how the transpose of the cofactor matrix interacts with the matrix itself through matrix multiplication to give a diagonal matrix.

The numbers in the bulleted list are called cofactors. I prefer to write $C_{ij}$ for what the question denotes "$(A^{-1})_{ij}$". I hope the following classical argument is accessible to any interested high-school student.

By the well-known Laplace expansion formula for determinants, for any $i \in \lbrace1,\dots,n\rbrace$,

\begin{align} \det(A) &= \sum_{j=1}^n (-1)^{i+j} a_{ij} \det(M_{ij}) \\ &= \sum_{j=1}^n a_{ij} C_{ij} \label{1}\tag{1} \end{align}
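As a sanity check on \eqref{1}, the expansion can be implemented directly. This is a minimal sketch (the recursive helper `det` is my own), expanding along the first row:

```python
# Laplace expansion along the first row:
# det(A) = sum_j (-1)^(1+j) * a_{1j} * det(M_{1j}).
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1j}: delete row 1 and column j.
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [0, 2, 3], [0, 0, 3]]))  # 6, matching the question
```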

On the other hand, a matrix product $AB$ has entries of the form $$(AB)_{ij} = \sum_{k=1}^n a_{ik} b_{kj} \label{2}\tag{2}.$$

To make \eqref{1} resemble \eqref{2} more closely, we consider the transpose of the cofactor matrix, whose $(j,i)$-th entry is $C_{ij}$. For any fixed $i$,

$$ \det(A) = \sum_{j=1}^n a_{ij} (C^T)_{ji} \label{1'}\tag{1'} $$

Relabelling the summation index $j$ in \eqref{1'} as $k$, we get, for each fixed $i$,

$$ \det(A) = \sum_{k=1}^n a_{ik} (C^T)_{ki} \label{1''}\tag{1''} $$

\eqref{1''} represents each diagonal entry of $AC^T$. To completely address the question writer's additional question in the comments, one needs to justify that the off-diagonal entries of $AC^T$ equal zero.

For each fixed pair $(i,j)$ with $i\ne j$, $$ (AC^T)_{ij} = \sum_{k=1}^n a_{ik} (C^T)_{kj} = \sum_{k=1}^n a_{ik} C_{jk} \label{3}\tag{3} $$

This is in fact the Laplace expansion of the determinant

$$ \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots& \vdots & \ddots & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{in} \\ \vdots& \vdots & \ddots & \vdots \\ \color{red}{a_{i1}} & \color{red}{a_{i2}} & \cdots & \color{red}{a_{in}} \\ \vdots& \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} $$

Note that the $\color{red}{j\text{-th row}}$ of the above matrix has been replaced by the $i$-th row of $A$, as the Laplace expansion formula suggests. Since any matrix with two identical rows has zero determinant, we conclude the following useful formula. $$\bbox[2px, border: 1px solid red]{\det(A)I_n=AC^T=C^TA}$$

Up to this step, the argument works for entries in any commutative ring (a set equipped with addition and commutative multiplication; I do not know whether it works for non-commutative rings).
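The boxed identity $\det(A)I_n=AC^T=C^TA$ can be confirmed numerically for the question's matrix. A minimal sketch (all helper names, `minor`, `cofactor_matrix`, `transpose`, `matmul`, are mine):

```python
# Confirm det(A) I = A C^T = C^T A for the question's 3x3 matrix.
def minor(M, i, j):
    """Delete row i and column j of M."""
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant via Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def cofactor_matrix(M):
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)]
            for i in range(n)]

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 3], [0, 2, 3], [0, 0, 3]]
CT = transpose(cofactor_matrix(A))
print(matmul(A, CT))  # [[6, 0, 0], [0, 6, 0], [0, 0, 6]] = det(A) * I
print(matmul(CT, A))  # the same diagonal matrix
```

Both products come out as $6I_3$, as the boxed formula predicts.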

To get the well-known matrix formula, we divide both sides by $\det(A)$. This last step requires $\det(A)$ to be invertible; for entries in a field, this means $\det(A)\neq 0$.
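Carrying out that last division for the question's matrix gives the book's answer. A sketch with exact fractions (helper names `minor` and `det` are mine; note the swapped indices in the comprehension, which implement the transpose):

```python
# A^{-1} = C^T / det(A), computed entry by entry with exact arithmetic.
from fractions import Fraction

def minor(M, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

A = [[1, 2, 3], [0, 2, 3], [0, 0, 3]]
n, d = len(A), det(A)
# (i, j) entry of the inverse is C_{ji} / det(A): note minor(A, j, i), not (i, j).
A_inv = [[Fraction((-1) ** (i + j) * det(minor(A, j, i)), d) for j in range(n)]
         for i in range(n)]
print([[str(x) for x in row] for row in A_inv])
# [['1', '-1', '0'], ['0', '1/2', '-1/2'], ['0', '0', '1/3']]
```

This is exactly $\frac16\begin{bmatrix} 6&-6&0\\ 0&3&-3\\ 0&0&2 \end{bmatrix}$, the answer from the question.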


Your lecture notes seem to be incorrect. The adjugate (cofactor) formula, a consequence of Cramer's rule, states that

$$A^{-1}=\frac{1}{\det(A)}\cdot\mathrm{adj}(A)$$

where $\mathrm{adj}(A)=C^{\top}$ is the transpose of the cofactor matrix and $C_{ij}=(-1)^{i+j}\det(M_{ij})$. You (or your professor) seem to have missed the transpose.