We are given a matrix $A\in \{0,1\}^{m\times n}$, where $n=a+b$. We know that for each row $i$, the following holds: all entries are $0$ except for two of them, which are $1$. One lies in the column range $[1..a]$, and the other in $[a+1..a+b]$. Formally, $\forall i\in \{1..m\}\;\exists j\in\{1..a\},\,k\in\{a+1..a+b\}:\;A_{i,l}=1\iff l=j \lor l=k$.
We are asked to prove that the determinant of every square submatrix of $A$ lies in $\{-1, 0, 1\}$. As a hint, it is suggested to use induction over the size of the submatrices.
It's easy to see that every $1\times 1$ submatrix has determinant $\in \{0,1\}$, and a simple enumeration of the $16$ possible $2\times 2$ cases confirms the claim for $2\times 2$ submatrices as well. My idea was to compute the determinant via Laplace expansion. Since each row contains at most two ones, the expansion along a row has at most two subterms. By induction these subterms also have values $\in\{-1,0,1\}$, but I don't know how to prove that the two terms aren't both $-1$ or both $1$.
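For reference, the $2\times 2$ enumeration can be done mechanically. A small Python sketch (not part of the argument, just a sanity check) that runs through all $16$ binary $2\times 2$ matrices:

```python
from itertools import product

# Enumerate all 16 binary 2x2 matrices and collect their determinants.
# For ((a, b), (c, d)) the determinant is a*d - b*c.
dets = set()
for a, b, c, d in product((0, 1), repeat=4):
    dets.add(a * d - b * c)

print(sorted(dets))  # -> [-1, 0, 1]
```

Every determinant indeed lies in $\{-1,0,1\}$ (and all three values occur).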
Thank you for any ideas!
Let us say that a submatrix is of type $(k,l)$ if it spans $k$ columns from $[1..a]$ and $l$ columns from $[a+1..a+b]$.
If $k = 0$ or $l = 0$, the only way the subdeterminant can be nonzero is if the submatrix is a permutation matrix, because it has at most one $1$ in each row. So in these cases, the determinant is $\in \{-1, 0, 1\}$.
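This base case is easy to check by brute force. A Python sanity check (my own illustration, with $3\times 3$ as an arbitrary size): every row is either zero or a standard basis vector, and the determinant always lands in $\{-1,0,1\}$.

```python
from itertools import product

# Rows with at most one 1: the zero row or a standard basis vector.
rows = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

def det3(m):
    # Laplace expansion along the first row of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# All 4^3 = 64 such matrices; permutation matrices give +-1, the rest give 0.
dets = {det3(m) for m in product(rows, repeat=3)}
print(dets <= {-1, 0, 1})  # -> True
```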
For $k>0$ and $l>0$, let us assume that the property is known for all submatrices of types $(k-1,l)$ and $(k,l-1)$. Take a submatrix $B$ of type $(k,l)$ and denote its elements $B_{ij}$.
In the $(k+l)$-th (last) row of $B$, there can be either no ones (then $|B| = 0$), a single one (then Laplace expansion along the last row reduces $|B|$ to a single subdeterminant of type $(k-1,l)$ or $(k,l-1)$, which lies in $\{-1,0,1\}$ by the induction hypothesis), or two ones, say in columns $i \le k$ and $j > k$.
In the last case, take the submatrices obtained from $B$ by omitting the last row together with the $i$-th column or the $j$-th column (denoted $B_1$ and $B_2$, respectively), so that $|B| = \pm|B_1| \pm |B_2|$.
If $|B_1| = 0$ or $|B_2| = 0$, then $|B| = \pm|B_2|$ or $|B| = \pm|B_1|$, respectively, and we are done by the induction hypothesis; so let us see how both could be nonzero.
The matrices $B_1$ and $B_2$ differ in one column (and in its position within the matrix, but that only affects the sign): one has the entries $B_{1,i},\dots,B_{k+l-1,i}$ and the other the entries $B_{1,j},\dots,B_{k+l-1,j}$. Each of these sets must contain a nonzero entry, since otherwise $B_1$ or $B_2$ would have an all-zero column. Pick a $p$ such that $B_{pi} = 1$. No other column among the first $k$ can have a $1$ in row $p$, so if this column does not appear in $B_1$ (because $i$ was left out), then the entry coming from $B_{pj}$ must compensate for that to keep both determinants nonzero. But now we have ones in
$$B_{k+l,i}, \quad B_{k+l,j}, \quad B_{pi}, \quad B_{pj}$$
and no other ones can appear in those two rows, so rows $p$ and $k+l$ of $B$ are identical, and as a result $|B| = 0$.
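For concreteness, a minimal instance of this configuration (my own illustration, with $k=l=2$, $i=1$, $j=3$, $p=1$; the starred entries are arbitrary) is

$$B=\begin{pmatrix}1&0&1&0\\ \ast&\ast&\ast&\ast\\ \ast&\ast&\ast&\ast\\ 1&0&1&0\end{pmatrix},$$

where the first and last rows coincide, forcing $|B|=0$.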
I presume it is clear that this induction chain, deriving $(k,l)$ from either $(k-1,l)$ or $(k,l-1)$, sooner or later reaches one of the base cases $(k,0)$ or $(0,l)$ in every branch.
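The full statement can also be sanity-checked numerically. The following Python sketch (the sizes $a=b=3$, $m=5$ and the random seed are my own arbitrary choices, not from the problem) builds a random matrix of the given shape and verifies that every square submatrix has determinant in $\{-1,0,1\}$, using exact rational arithmetic to avoid floating-point issues:

```python
import itertools
import random
from fractions import Fraction

def int_det(rows):
    # Exact determinant via fraction-based Gaussian elimination.
    n = len(rows)
    m = [[Fraction(x) for x in r] for r in rows]
    det = Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return 0  # zero column below the diagonal -> singular
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            det = -det  # row swap flips the sign
        det *= m[col][col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    return int(det)

# Build a random matrix with one 1 in columns [0..a) and one in [a..a+b).
a, b, mrows = 3, 3, 5
random.seed(0)
A = []
for _ in range(mrows):
    row = [0] * (a + b)
    row[random.randrange(a)] = 1
    row[a + random.randrange(b)] = 1
    A.append(row)

# Check every square submatrix of every size.
n = a + b
ok = True
for size in range(1, min(mrows, n) + 1):
    for rs in itertools.combinations(range(mrows), size):
        for cs in itertools.combinations(range(n), size):
            sub = [[A[r][c] for c in cs] for r in rs]
            if int_det(sub) not in (-1, 0, 1):
                ok = False
print(ok)  # -> True
```

Of course this only spot-checks one instance; the induction above is what proves it in general.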