I am currently reading this paper: https://www.jstor.org/stable/2100866
Here are the first few definitions, extracted from the paper.
Given a directed graph $G = (V,E)$, for each $v \in V$ we define $$ A(v) = \left\{ w \in V : (v,w) \in E \right\} \\ B(v) = \left\{ u \in V : (u,v) \in E \right\} $$
and the deficiency $$ D(v) = \left\{ (x,y) : x \to v,\ v \to y,\ y \notin A(x),\ x \neq y \right\} $$
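To make sure I am reading the definitions correctly, I wrote a small Python sketch (the function names `A`, `B`, `D` mirror the paper's notation; representing the graph as a set of edge tuples is my own choice):

```python
# A(v): successors of v, B(v): predecessors of v, D(v): the deficiency,
# i.e. the pairs (x, y) with x -> v and v -> y that are not already edges.

def A(edges, v):
    """Successors of v: vertices w with (v, w) in E."""
    return {w for (x, w) in edges if x == v}

def B(edges, v):
    """Predecessors of v: vertices u with (u, v) in E."""
    return {u for (u, y) in edges if y == v}

def D(edges, v):
    """Deficiency of v: pairs (x, y) with x -> v, v -> y,
    y not already a successor of x, and x != y."""
    return {(x, y)
            for x in B(edges, v)
            for y in A(edges, v)
            if y not in A(edges, x) and x != y}

# Tiny example: the 3-cycle 1 -> 2 -> 3 -> 1.
edges = {(1, 2), (2, 3), (3, 1)}
print(D(edges, 2))  # {(1, 3)}: eliminating 2 would force the edge 1 -> 3
```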
Later the following is stated:
The notion of vertex elimination arises in the solution of sparse systems of linear equations by Gaussian Elimination. Given any $n \times n$ matrix $M = (m_{ij})$ which represents the coefficients of a set of linear equations, we can construct an ordered graph $G_{\alpha} = (V,E,\alpha)$, where vertex $v_i$ corresponds to row $i$, and $(v_i,v_j) \in E$ iff $m_{ij} \neq 0$ and $i \neq j$. The unordered graph $G = (V,E)$ corresponds to the equivalence class of matrices $PMP^T$, where $P$ is any permutation matrix. If we solve the system with coefficients $M$ using Gaussian Elimination, eliminating the variables in the order $1,2,\ldots, n$, then the edges of $D_{i-1}(v_i)$ correspond exactly to the new nonzero elements created when row $i$ is eliminated.
To test this, I tried one step of elimination on the following matrix:
$$ M = \left( \begin{matrix} 2 & 0 & 1 & 0 & 1 \\ 4 & 1 & 0 & 0 & 1 \\ 0 & 5 & 0 & 1 & 0 \\ 7 & 0 & 0 & 0 & 4 \\ 1 & 0 & 0 & 0 & 1 \end{matrix} \right) $$
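If I have read the construction right, the edge set of $G_{\alpha}$ for this $M$ can be computed with a short sketch (vertices are numbered $1,\dots,5$ to match the rows; the function name `graph_of` is mine):

```python
# Build the edge set of the directed graph G from an n x n matrix M,
# as the quoted passage describes: (v_i, v_j) in E iff m_ij != 0, i != j.

def graph_of(M):
    n = len(M)
    return {(i + 1, j + 1)                      # 1-based vertex labels
            for i in range(n) for j in range(n)
            if i != j and M[i][j] != 0}

M = [[2, 0, 1, 0, 1],
     [4, 1, 0, 0, 1],
     [0, 5, 0, 1, 0],
     [7, 0, 0, 0, 4],
     [1, 0, 0, 0, 1]]

print(sorted(graph_of(M)))
# [(1, 3), (1, 5), (2, 1), (2, 5), (3, 2), (3, 4), (4, 1), (4, 5), (5, 1)]
```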
One step of elimination (subtracting twice the first row from the second row) yields
$$ M' = \left( \begin{matrix} 2 & 0 & 1 & 0 & 1 \\ 0 & 1 & -2 & 0 & -1 \\ 0 & 5 & 0 & 1 & 0 \\ 7 & 0 & 0 & 0 & 4 \\ 1 & 0 & 0 & 0 & 1 \end{matrix} \right) $$
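To see all the fill produced by one pivot step, I also wrote a small sketch; my (possibly wrong) reading of "eliminating variable 1" is that row 1 is the pivot row and column 1 is zeroed out in every other row, not just row 2:

```python
from fractions import Fraction

# One full pivot step of Gaussian elimination on the example matrix:
# row 1 is the pivot row, and column 1 is zeroed out in every other
# row with a nonzero there (rows 2, 4 and 5), not just row 2.
# Exact arithmetic via Fraction avoids accidental zeros from rounding.
M = [[2, 0, 1, 0, 1],
     [4, 1, 0, 0, 1],
     [0, 5, 0, 1, 0],
     [7, 0, 0, 0, 4],
     [1, 0, 0, 0, 1]]
W = [[Fraction(x) for x in row] for row in M]

p = 0            # pivot index (variable 1, zero-based)
fill = set()     # positions that were zero and become nonzero
for i in range(len(W)):
    if i == p or W[i][p] == 0:
        continue
    factor = W[i][p] / W[p][p]
    for j in range(len(W)):
        was_zero = (W[i][j] == 0)
        W[i][j] -= factor * W[p][j]
        if was_zero and W[i][j] != 0 and j != p:
            fill.add((i + 1, j + 1))  # 1-based, matching the paper

print(sorted(fill))  # [(2, 3), (4, 3), (5, 3)]
```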
Now, computing $D(2)$ on the original matrix should give me edges corresponding to the new nonzero entries. However, I don't see how this matches the elimination step I performed.
I think I am misunderstanding what "row $i$ is eliminated" means.
Can anyone clarify the meaning using the example matrix $M$ above?