This is equivalent to a result in Prasolov's book on linear algebra whose proof is not clear to me. I need help in understanding why the result is true.
Let $x_1,x_2,\dots,x_n$ be row vectors in $\mathbb{R}^n$, and let $e_1,\dots,e_n$ denote the canonical basis row vectors in $\mathbb{R}^n$.
Let $M(x_1,\dots,x_n)$ denote the determinant of the matrix with rows $x_1,\dots,x_n$.
Choose an integer $1 \leq k < n$.
Given a subset $A \subset \{1,\dots,n\}$ with $n - k$ elements, say $A = \{i_1,\dots,i_{n-k}\}$ with $i_1 < i_2 < \dots < i_{n-k}$, let $M(x_1,\dots,x_k,A)$ denote the determinant of the matrix with rows $x_1,\dots,x_k,e_{i_1},\dots,e_{i_{n-k}}$.
Define $M(A,x_{k+1},\dots,x_{n})$ similarly for each subset $A \subset \{1,\dots,n\}$ of size $k$: its rows are the basis vectors indexed by $A$ followed by $x_{k+1},\dots,x_n$.
Then we have, for some suitable choice of signs,
$$ M(x_1,\dots,x_n) = \sum_{A : |A| = n -k} \pm M(x_1,\dots,x_k,A) M(A^{c},x_{k+1},\dots,x_n).$$
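Before the proof, here is a quick numerical sanity check of the identity (a sketch with my own helpers `det` and `e`; the concrete sign $(-1)^{(k+1)+\cdots+n+\sum_{i \in A} i}$ used below is one choice of signs that makes the identity hold, not the only possible convention):

```python
from itertools import permutations, combinations

def det(m):
    """Determinant by permutation expansion; exact for integer entries."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        prod = 1
        for r in range(n):
            prod *= m[r][p[r]]
        total += (-1) ** inv * prod
    return total

def e(i, n):
    """Canonical basis row vector e_i (1-indexed)."""
    return [1 if j == i - 1 else 0 for j in range(n)]

n, k = 4, 2
# An arbitrary integer matrix; rows are x_1, ..., x_n.
x = [[(3 * i + 5 * j + i * j) % 7 + 1 for j in range(n)] for i in range(n)]

lhs = det(x)
rhs = 0
for A in combinations(range(1, n + 1), n - k):       # |A| = n - k
    Ac = [i for i in range(1, n + 1) if i not in A]  # |A^c| = k
    M1 = det(x[:k] + [e(i, n) for i in A])           # M(x_1,...,x_k, A)
    M2 = det([e(i, n) for i in Ac] + x[k:])          # M(A^c, x_{k+1},...,x_n)
    sign = (-1) ** (sum(range(k + 1, n + 1)) + sum(A))  # conjectured sign choice
    rhs += sign * M1 * M2

print(lhs == rhs)
```

Running this for other small $n$, $k$ and other integer matrices gives the same agreement.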
The identity follows from the generalized Laplace expansion of the determinant along the first $k$ rows.
We have, $$ M(x_1,\dots,x_n) = \sum_{A : |A| = k}\pm S(1,\dots,k;A) S(k+1,\dots,n;A^c) $$
where, for $A \subset \{1,\dots, n\}$ with $|A| = k$, $S(1,\dots,k;A)$ denotes the determinant of the submatrix of the matrix with rows $x_1,\dots,x_n$ obtained by taking row indices $1,\dots,k$ and column indices $j_1 < j_2 < \dots < j_k$, where $A = \{j_1,\dots,j_k\}$.
$S(k+1,\dots,n;A^c)$ is defined similarly, using row indices $k+1,\dots,n$ and the column indices in $A^c$.
The sign corresponding to $A = \{j_1,\dots,j_k\}$ above is $(-1)^{1 + \dots + k + j_1 + \dots + j_k}$.
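This expansion, with exactly the stated sign, can be checked numerically (a sketch; the helper `det` and the test matrix are mine):

```python
from itertools import permutations, combinations

def det(m):
    """Determinant by permutation expansion; exact for integer entries."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        prod = 1
        for r in range(n):
            prod *= m[r][p[r]]
        total += (-1) ** inv * prod
    return total

n, k = 4, 2
x = [[(2 * i + 3 * j + i * j) % 5 + 1 for j in range(n)] for i in range(n)]

lhs = det(x)
rhs = 0
for A in combinations(range(1, n + 1), k):
    Ac = [j for j in range(1, n + 1) if j not in A]
    S1 = det([[x[r][c - 1] for c in A] for r in range(k)])      # S(1,...,k; A)
    S2 = det([[x[r][c - 1] for c in Ac] for r in range(k, n)])  # S(k+1,...,n; A^c)
    sign = (-1) ** (sum(range(1, k + 1)) + sum(A))              # (-1)^{1+...+k+j_1+...+j_k}
    rhs += sign * S1 * S2

print(lhs == rhs)
```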
$S(1,\dots,k;A)$ equals the determinant of the following block-triangular matrix (expand along the last $n-k$ rows, whose only nonzero entries lie in the identity block):
$$ \begin{pmatrix} x_{1,j_1} & x_{1,j_2} & \dots & x_{1,j_k} & x_{1,l_1} & x_{1,l_2} & \dots & x_{1,l_{n-k}} \\ x_{2,j_1} & x_{2,j_2} & \dots & x_{2,j_k} & x_{2,l_1} & x_{2,l_2} & \dots & x_{2,l_{n-k}} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_{k,j_1} & x_{k,j_2} & \dots & x_{k,j_k} & x_{k,l_1} & x_{k,l_2} & \dots & x_{k,l_{n-k}} \\ 0 & 0 & \dots & 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \dots & 0 & 0 & 0 & \dots & 1 \\ \end{pmatrix} $$
where $x_{i,j}$ denotes the $(i,j)^{\text{th}}$ entry of the matrix with rows $x_1,\dots,x_n$, $A = \{j_1,\dots,j_k\}$ and $A^c = \{l_1,\dots,l_{n-k}\}.$ Rearranging the columns into their natural order we get $$ S(1,\dots,k;A) = \pm M(x_1,\dots,x_k,A^c). $$ A similar statement holds for the complementary minor, namely $S(k+1,\dots,n;A^c) = \pm M(A,x_{k+1},\dots,x_n)$, and the result follows.
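Both rearrangement claims can be verified numerically for every subset $A$ at once (a sketch; the helpers `det` and `e` and the test matrix are mine, and only equality up to sign is checked, matching the $\pm$ above):

```python
from itertools import permutations, combinations

def det(m):
    """Determinant by permutation expansion; exact for integer entries."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        prod = 1
        for r in range(n):
            prod *= m[r][p[r]]
        total += (-1) ** inv * prod
    return total

def e(i, n):
    """Canonical basis row vector e_i (1-indexed)."""
    return [1 if j == i - 1 else 0 for j in range(n)]

n, k = 4, 2
x = [[(i * i + 3 * j + 2) % 7 + 1 for j in range(n)] for i in range(n)]

all_match = True
for A in combinations(range(1, n + 1), k):
    Ac = [j for j in range(1, n + 1) if j not in A]
    S1 = det([[x[r][c - 1] for c in A] for r in range(k)])      # S(1,...,k; A)
    M1 = det(x[:k] + [e(i, n) for i in Ac])                     # M(x_1,...,x_k, A^c)
    S2 = det([[x[r][c - 1] for c in Ac] for r in range(k, n)])  # S(k+1,...,n; A^c)
    M2 = det([e(i, n) for i in A] + x[k:])                      # M(A, x_{k+1},...,x_n)
    all_match = all_match and abs(S1) == abs(M1) and abs(S2) == abs(M2)

print(all_match)
```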