Why does the Gauss-Jordan method produce the inverse of a matrix?


I know that we start with the matrix $[A \mid I]$ and that, after applying the method, it becomes $[I \mid A^{-1}]$. My question is: why does this happen? Is there a proof of it? Thanks.



The reason this happens is because when you reduce $A$ to the identity by a sequence of row operations, each row operation corresponds to multiplication on the left by some elementary matrix.

So if you have a sequence of row operations $R_1,\dots,R_n$ applied one after another to $A$, and if the corresponding elementary matrices are $M_1,\dots,M_n$, then when $A$ is reduced to the identity we obtain the equation $$M_n \dotsm M_1 \, A = I.$$ Multiplying both sides of this equation on the right by $A^{-1}$ we obtain $$M_n \dotsm M_1 \, I = A^{-1}.$$ This means that if you apply the row operations $R_1,\dots,R_n$ one after another to $I$, then you get $A^{-1}$.
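As a sketch of this argument in code (a NumPy illustration, not part of the original answer): every row operation applied to the left block of $[A \mid I]$ is simultaneously applied to the right block, so when the left block reaches $I$, the right block holds $M_n \dotsm M_1 \, I = A^{-1}$.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I].

    Each row operation acts on the whole augmented matrix, so the
    same operations that turn the left block into I turn the right
    block into A^{-1}.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])  # the augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: bring the largest entry of the column up.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]  # row swap
        aug[col] /= aug[col, col]              # scale the pivot row to 1
        for row in range(n):
            if row != col:                     # clear the rest of the column
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                          # right block is A^{-1}

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(gauss_jordan_inverse(A))  # the inverse [[3, -1], [-5, 2]]
```

Here the row swap, scaling, and elimination steps are exactly the three kinds of elementary row operations $R_i$; the function never forms the matrices $M_i$ explicitly, it just applies them to both blocks at once.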


Doing a row operation corresponds to multiplying from the left by a certain elementary matrix. Gaussian elimination can be viewed as a systematic way to find a sequence of elementary matrices such that $$ B_n\cdots B_2 B_1 A = I $$ when $A$ has full rank. Thus, by definition, the product $B=B_n\cdots B_1$ is the inverse of $A$.

Now multiplying a matrix from the left transforms the columns of the right-hand factor one by one. So we generally have $$ B[A\;C] = [BA\;BC]$$ whenever each of $A$ and $C$ has the right number of rows. Setting $C$ to $I$ we get $$ B[A\;I] = [I\;B]$$ because (as argued above) $B$ is the inverse of $A$. So instead of writing out the elementary matrices and multiplying them one by one, you can recover their product by simply doing the same row operations on a copy of the identity matrix.
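The block identity $B[A\;C]=[BA\;BC]$ used above can be checked numerically (a small NumPy demonstration, with randomly chosen matrices as an assumption, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))   # left factor
A = rng.standard_normal((3, 3))   # first block of the right factor
C = rng.standard_normal((3, 2))   # second block (any column count works)

left = B @ np.hstack([A, C])               # B [A C]
right = np.hstack([B @ A, B @ C])          # [BA BC]
print(np.allclose(left, right))            # the two sides agree
```

This is just multiplication acting block-by-block on the columns, which is why row-reducing $[A\;I]$ applies $B$ to both blocks at once.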


This is because elementary operations on the rows of a matrix correspond to multiplying on the left by an invertible matrix (called an elementary matrix). Thus after a finite number of steps, you obtain $$E_k E_{k-1}\dotsm E_1[\mkern1mu A\;I\mkern2mu]=[\mkern1mu E_k E_{k-1}\dotsm E_1A\;E_k E_{k-1}\dotsm E_1I\mkern2mu]=[\mkern1mu I\;E_k E_{k-1}\dotsm E_1].$$ By identification, the submatrix made up of the last $n$ columns is the inverse of $A$: $$E_k E_{k-1}\dotsm E_1=A^{-1}.$$


If $A$ is an invertible $n\times n$ matrix, the $k^{\text{th}}$ column $a_k$ of $A^{-1}$ is the solution of $Aa_k=e_k$, where $e_k$ is the $k^{\text{th}}$ unit vector.

To calculate all columns of $A^{-1}$, one has to solve the $n$ linear systems $Aa_k=e_k$ for $k=1,\dots,n$. Fortunately, one may solve them all simultaneously by row-reducing the single augmented matrix $[A\,|\,I]$, whose right-hand block stacks the vectors $e_1,\dots,e_n$ side by side.
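This column-by-column view can be verified directly (a NumPy check, not part of the original answer; the example matrix is arbitrary): each column of $A^{-1}$ is exactly the solution of the system with the corresponding unit vector as right-hand side.

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
A_inv = np.linalg.inv(A)          # reference inverse
n = A.shape[0]

for k in range(n):
    e_k = np.eye(n)[:, k]             # k-th unit vector
    a_k = np.linalg.solve(A, e_k)     # solve A a_k = e_k
    # a_k matches the k-th column of A^{-1}
    assert np.allclose(a_k, A_inv[:, k])
```

Solving all $n$ systems in one elimination pass over $[A\,|\,I]$ is cheaper than running the elimination $n$ separate times, which is the practical point of the Gauss-Jordan inversion procedure.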