In my first year at Uni I learned that one can find the inverse of a matrix, $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, by row-reducing the augmented system
$$\left[\begin{array}{cc|cc} a_{11} & a_{12} & 1 & 0 \\ a_{21} & a_{22} & 0 & 1 \end{array}\right]$$
into
$$\left[\begin{array}{cc|cc} 1 & 0 & b_{11} & b_{12} \\ 0 & 1 & b_{21} & b_{22} \end{array}\right]$$
where $B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = A^{-1}$.
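The procedure above can be sketched numerically; here is a minimal NumPy version (the matrix `A` below is just an arbitrary invertible example, not from the question):

```python
import numpy as np

def invert_by_elimination(A):
    """Invert A by row-reducing the augmented matrix [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # build [A | I]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale so the pivot becomes 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # zero out the rest of the column
    return M[:, n:]  # the right half is now A^{-1}

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = invert_by_elimination(A)
print(np.allclose(A @ B, np.eye(2)))  # True
```

The same row operations are applied to both halves of the augmented matrix, which is exactly what makes the right half end up as $A^{-1}$.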
After learning that matrices are essentially linear transformations, which scale volumes by a certain factor (the determinant), I now wonder how this simple procedure makes sense.
If, in general, a matrix $A_{n\times m}$ transforms a matrix $B_{m \times q}$ into $C_{n \times q}$, and the inverse does the opposite, taking $C_{n \times q}$ back to $B_{m \times q}$, I wonder how this simple procedure can accomplish that. It seems like magic to me.
(Remark: In case I violated some rules on questions, just let me know, I'll try to fix it right away.)
Row operations can be represented by elementary matrices, which reduces elimination to matrix multiplication. If I have an invertible matrix $A$, I can decompose it into a product of elementary matrices, $A = E_1 E_2 \cdots E_n$, which you can think of as starting with the identity matrix and performing row operations to produce $A$. Now, if we are interested in some $B$ such that $AB = I$, where $I$ is the identity matrix, I can multiply on the left by the inverses of the elementary matrices in reverse order, since $(E_1 E_2 \cdots E_n)^{-1} = E_n^{-1} E_{n-1}^{-1} \cdots E_1^{-1}$. Applying these inverse row operations to the augmented matrix turns $A$ into $I$ and, simultaneously, turns $I$ into $A^{-1}$, and this justifies the algorithm you've described.
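To make this concrete, here is a small NumPy sketch with a hypothetical $2\times 2$ example: each elementary matrix below encodes one row operation, their product reduces $A$ to $I$, and that same product is $A^{-1}$:

```python
import numpy as np

# Arbitrary invertible example (not from the question).
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

# Each elementary matrix is the identity with one row operation applied.
E1 = np.array([[0.5, 0.0],    # scale row 0 by 1/2
               [0.0, 1.0]])
E2 = np.array([[ 1.0, 0.0],   # subtract 5 * row 0 from row 1
               [-5.0, 1.0]])
E3 = np.array([[1.0, 0.0],    # scale row 1 by 2 (its pivot is 1/2 after E2)
               [0.0, 2.0]])
E4 = np.array([[1.0, -0.5],   # subtract 1/2 * row 1 from row 0
               [0.0,  1.0]])

# Applying the operations to A reduces it to the identity...
print(np.allclose(E4 @ E3 @ E2 @ E1 @ A, np.eye(2)))        # True
# ...so the same operations applied to I accumulate into A^{-1}.
print(np.allclose(E4 @ E3 @ E2 @ E1, np.linalg.inv(A)))     # True
```

Row-reducing the augmented matrix $[A \mid I]$ applies this product to both halves at once, which is why the right half ends up holding $A^{-1}$.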