How does augmenting a square matrix (LHS) with an identity matrix (RHS) and then reducing the square matrix to an identity matrix and performing the same operations on the identity matrix using elementary row operations give the inverse matrix on the (RHS)?
What's the logic behind this? I don't understand the mathematical equations that show this is true.
I still don't understand the logic behind it; the equations don't help me.
What does Ei mean?
Suppose that your initial matrix is $A$ (for simplicity, let's say $A$ is 2 by 2, though you can easily see that the dimension doesn't affect this proof, as long as $A$ is square and invertible).
Each elementary row operation can be represented by multiplying $A$ by some matrix $E_i.$ For example, if you want to replace the second row by the first row minus the second row, this is represented by $$E_1 = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} \rightarrow E_1 A = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} A .$$ So each elementary row operation is represented by left-multiplying $A$ by some matrix $E_i.$ Note, also, that each elementary row operation is invertible, and thus the matrix $E_i$ that represents this operation is invertible.
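To make this concrete, here is a small numerical check (the matrix $A$ below is a hypothetical example; any 2-row matrix works) showing that left-multiplying by $E_1$ really does replace the second row with (row 1 minus row 2):

```python
import numpy as np

# A hypothetical example matrix; any values would do.
A = np.array([[2.0, 3.0],
              [4.0, 5.0]])

# E1 encodes the row operation: row 2 <- row 1 - row 2.
E1 = np.array([[1.0, 0.0],
               [1.0, -1.0]])

result = E1 @ A
# Row 1 is unchanged; row 2 becomes A[0] - A[1], i.e. [-2, -2].
print(result)
```

Comparing `result` with `A` row by row confirms that matrix multiplication on the left performs the row operation.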
So suppose that through some sequence of $n$ elementary row operations, you reduce $A$ to the identity matrix. Then you have the equation $$E_n E_{n-1} \dots E_2 E_1 A = I,$$ where the sequence of $E_i$ represents reducing your matrix $A$ to the identity matrix. Multiplying both sides on the right by $A^{-1}$ gives $$E_n E_{n-1} \dots E_2 E_1 = A^{-1},$$ and therefore $$A^{-1} = E_n E_{n-1} \dots E_2 E_1 I.$$ Thus, applying the very same sequence of elementary row operations to the identity matrix produces exactly the inverse of $A$!