Say you have two linear systems $$Ax=y$$
$$Bu=v$$
It's possible to combine these into a single larger equation, as follows:
$$\begin{bmatrix}A & 0\\0 & B\\\end{bmatrix} \begin{bmatrix}x\\u\end{bmatrix} = \begin{bmatrix}y\\v\end{bmatrix}$$
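As a quick numeric sanity check (the matrices and vectors here are hypothetical 2×2 examples, not from the question), the block-diagonal combination really is just the two systems stacked:

```python
import numpy as np

# Two independent systems A x = y and B u = v (2x2 chosen arbitrarily).
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# Block-diagonal combination [[A, 0], [0, B]].
M = np.block([[A, np.zeros((2, 2))],
              [np.zeros((2, 2)), B]])

x = np.array([1.0, 1.0])
u = np.array([2.0, 2.0])
y = A @ x
v = B @ u

# Solving the combined system is equivalent to solving each part separately.
assert np.allclose(M @ np.concatenate([x, u]), np.concatenate([y, v]))
```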
However, if you know that $x_i=u_j$ and $y_i=v_j$ for some $i$ and $j$, then it's possible to eliminate two rows and two columns of the combined matrix by a few relatively simple steps (equating two of the linear equations and substituting for the eliminated variable).
Firstly, what is this operation called? Presumably this is a well-studied and commonly used operation.
Secondly (although with an answer to the first I may be able to figure this out): how efficiently can this be done, as a function of the sizes of the original matrices and the number of variable pairs that can be eliminated? Particularly if the matrices are very sparse.
Here's a full worked example of the reduction I'm describing:
$$\begin{bmatrix}a&b&0&0\\c&d&0&0\\0&0&e&f\\0&0&g&h\end{bmatrix} \begin{bmatrix}u_1\\u_2\\u_3\\u_4\end{bmatrix} = \begin{bmatrix}v_1\\v_2\\v_3\\v_4\end{bmatrix}$$
Given that $u_2=u_3$ and $v_2=v_3$:
$$\begin{bmatrix}a&b&0\\c&d&0\\0&e&f\\0&g&h\end{bmatrix} \begin{bmatrix}u_1\\u_2\\u_4\end{bmatrix} = \begin{bmatrix}v_1\\v_2\\v_2\\v_4\end{bmatrix}$$
Equating the second and third rows gives $cu_1+du_2=eu_2+fu_4$, so $u_2=\frac{c}{e-d}u_1-\frac{f}{e-d}u_4$. Substituting into the remaining rows gives:
$$\begin{bmatrix} a+\frac{c}{e-d}b& \frac{-f}{e-d}b\\ \frac{c}{e-d}g& h+\frac{-f}{e-d}g\end{bmatrix} \begin{bmatrix}u_1\\u_4\end{bmatrix} = \begin{bmatrix}v_1\\v_4\end{bmatrix}$$
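The reduction above can be checked numerically. The values of $a$ through $h$ below are arbitrary (any choice with $e \ne d$ works); the script builds the original 4×4 system, enforces the constraints $u_2=u_3$ and $v_2=v_3$, and confirms the reduced 2×2 system produces the same $v_1$ and $v_4$:

```python
import numpy as np

# Arbitrary coefficients; only e != d is required for the substitution.
a, b, c, d = 2.0, 3.0, 5.0, 1.0
e, f, g, h = 4.0, 2.0, 6.0, 7.0

M = np.array([[a, b, 0, 0],
              [c, d, 0, 0],
              [0, 0, e, f],
              [0, 0, g, h]])

# Pick u1, u4 freely; equating rows 2 and 3 then forces u2 (= u3).
u1, u4 = 1.5, -0.5
u2 = (c * u1 - f * u4) / (e - d)
u = np.array([u1, u2, u2, u4])
v = M @ u
assert np.isclose(v[1], v[2])  # the constraint v2 = v3 holds

# Reduced 2x2 system from the derivation above.
R = np.array([[a + c/(e-d)*b, -f/(e-d)*b],
              [c/(e-d)*g,     h - f/(e-d)*g]])
assert np.allclose(R @ np.array([u1, u4]), np.array([v[0], v[3]]))
```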
Maybe Gaussian elimination? See: https://en.wikipedia.org/wiki/Gaussian_elimination