I stumbled upon the following operation on matrices in section 2.2.6 Arrays and Orthogonal Lists in the first volume of The Art of Computer Programming, in an example of working with sparse matrices: $$ \begin{pmatrix} &\vdots&&\vdots&\\ \dots&a&\dots&b&\dots\\ &\vdots&&\vdots&\\ \dots&c&\dots&d&\dots\\ &\vdots&&\vdots& \end{pmatrix} \leadsto \begin{pmatrix} &\vdots&&\vdots&\\ \dots&1/a&\dots&b/a&\dots\\ &\vdots&&\vdots&\\ \dots&-c/a&\dots&d-bc/a&\dots\\ &\vdots&&\vdots& \end{pmatrix} $$ (see it on Google Books). This operation is called there a "pivot step," the pivot in this case being the "$a$" entry in the first matrix. This operation is stated to be used in algorithms for inverting matrices and solving [systems of] linear equations, and in the simplex method. It is also cited here on page 141.
I do not know the simplex method, but I know how to solve systems of linear equations, and usually all the operations involved are elementary row transformations. However, this "pivot step" clearly cannot be obtained as a composition of elementary row transformations, nor does it preserve the rank of the matrix (consider a $2\times 2$ matrix with $a = b = c = d \neq 0$: the rank jumps from $1$ to $2$), and I do not understand its "meaning."
What is this operation?
This special "pivot" step consists of a row scaling with $a^{-1}$ and a linear update adding $-c$ times the new top row to the bottom row. This second operation would normally create a zero in the southwest corner. Here this entry is instead used to store the multiplier, i.e., $l_{ij} \gets -c/a$ for future use.
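Applied to the full matrix, one pivot step can be sketched as follows (a small Python helper of my own, `pivot_step` being a hypothetical name; it assumes the pivot entry is nonzero and does no pivot selection):

```python
def pivot_step(M, i, j):
    """Apply one pivot step in place, with pivot a = M[i][j] (assumed nonzero):
    a -> 1/a, pivot row b -> b/a, pivot column c -> -c/a,
    every other entry d -> d - b*c/a."""
    a = M[i][j]
    rows, cols = len(M), len(M[0])
    # Update the entries outside the pivot row/column first,
    # while the old values of the pivot row and column are still available.
    for r in range(rows):
        if r == i:
            continue
        for c in range(cols):
            if c == j:
                continue
            M[r][c] -= M[r][j] * M[i][c] / a
    for c in range(cols):
        if c != j:
            M[i][c] /= a              # pivot row: b -> b/a
    for r in range(rows):
        if r != i:
            M[r][j] = -M[r][j] / a    # pivot column: c -> -c/a
    M[i][j] = 1.0 / a                 # pivot itself: a -> 1/a

M = [[2.0, 3.0], [4.0, 5.0]]
pivot_step(M, 0, 0)
# M is now [[0.5, 1.5], [-2.0, -1.0]], i.e. [[1/a, b/a], [-c/a, d - b*c/a]]
```

The order of the updates matters: the "$d - bc/a$" entries must be formed before the pivot row and column are overwritten.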
In reality, this pivot step is part of what is called an in-place LU factorization: the matrix $A$ is overwritten by its factorization $A=LU$, rather than allocating space for new matrices $L = [l_{ij}]$ and $U = [u_{ij}]$.
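A minimal sketch of such an in-place factorization in Python (no row exchanges, nonzero pivots assumed; the strictly lower triangle ends up holding the negated multipliers $-l_{ik}$, the diagonal the reciprocals $1/u_{kk}$, and the strict upper triangle the scaled entries $u_{kj}/u_{kk}$):

```python
def lu_inplace(A):
    """Overwrite the square matrix A with its LU factorization in the
    storage variant described above.  Assumes nonzero pivots; no row
    exchanges are performed."""
    n = len(A)
    for k in range(n):
        inv = 1.0 / A[k][k]
        A[k][k] = inv                   # diagonal stores 1/u_kk
        for j in range(k + 1, n):
            A[k][j] *= inv              # scaled U entry: u_kj / u_kk
        for i in range(k + 1, n):
            c = A[i][k]
            A[i][k] = -c * inv          # store negated multiplier -c/a
            for j in range(k + 1, n):
                A[i][j] -= c * A[k][j]  # d - b*c/a (A[k][j] already holds b/a)

A = [[2.0, 3.0], [4.0, 5.0]]
lu_inplace(A)
# A is now [[0.5, 1.5], [-2.0, -1.0]]
```

Note that each trip through the $k$ loop is essentially one pivot step on the trailing submatrix, with the multipliers kept in the positions that would otherwise be zeroed out.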
The factorization is unusual only in the sense that $U$ is scaled by its own diagonal elements, i.e., $u_{ij} \gets u_{ii}^{-1} u_{ij}$ for $i < j$, and the diagonal of $U$ is overwritten with the reciprocal values $u_{ii}^{-1}$. The net effect is that subsequent solves involving the same matrix $A$, but new right-hand sides $f$, can be carried out without any explicit divisions.
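A division-free solve under this storage scheme might look as follows (a sketch; `F` is assumed to hold the in-place factorization of $A$, with negated multipliers below the diagonal, reciprocal pivots on it, and pivot-scaled entries above):

```python
def solve(F, f):
    """Solve A x = f, given F, the in-place factorization of A in the
    storage scheme above.  Only multiplications and additions occur."""
    n = len(F)
    y = list(f)
    # Forward substitution L y = f: the stored entries are -l_ij,
    # so the usual subtraction becomes an addition.
    for i in range(n):
        for j in range(i):
            y[i] += F[i][j] * y[j]
    # Back substitution U x = y: dividing by u_ii becomes
    # multiplying by the stored reciprocal F[i][i].
    x = y
    for i in range(n - 1, -1, -1):
        s = y[i] * F[i][i]
        for j in range(i + 1, n):
            s -= F[i][j] * x[j]       # F[i][j] holds u_ij / u_ii
        x[i] = s
    return x

# F below is the stored factorization of [[2, 3], [4, 5]]:
F = [[0.5, 1.5], [-2.0, -1.0]]
solve(F, [8.0, 14.0])
# -> [1.0, 2.0], since 2*1 + 3*2 = 8 and 4*1 + 5*2 = 14
```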
This leads to a minute increase in the backward error and an architecture-dependent increase in computational speed.