Suppose we have an $n \times n$ matrix and we know there are exactly $N$ zero entries. The determinant is a signed sum of $n!$ terms, each formed by selecting exactly one entry from each row and each column and multiplying them. If a term selects one or more zero entries then that term is zero. This raises the question of how many of the terms can be nonzero.
Intuitively I imagine the maximum number of zeros compatible with a nonzero determinant is achieved if we take a matrix of all $1$s and start putting in zeroes: first at entries $(1,2), (1,3), \ldots, (1,n)$, so the only nonzero entry in the first row is at $(1,1)$; then in the next row at $(2,1), (2,3), \ldots, (2,n)$, so the only nonzero entry there is at $(2,2)$; and so on until the last row receives zeroes at $(n,1), (n,2), \ldots, (n,n-1)$. At this stage we have the identity matrix and there is exactly one nonzero term.
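For small $n$ the number of nonzero terms is easy to check by brute force. A short Python sketch (the helper name `count_nonzero_terms` is my own) that counts the permutation terms whose product is nonzero:

```python
from itertools import permutations

def count_nonzero_terms(M):
    """Count permutations p for which M[0][p[0]] * ... * M[n-1][p[n-1]] != 0."""
    n = len(M)
    return sum(
        all(M[i][p[i]] != 0 for i in range(n))
        for p in permutations(range(n))
    )

# The 3x3 identity matrix leaves exactly one nonzero term (the main diagonal),
# while the all-ones matrix leaves all 3! = 6 terms nonzero.
identity = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
print(count_nonzero_terms(identity))  # 1
```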
I imagine this has already been proved somewhere but I cannot find a good reference. Could someone please provide a proof, or better yet, a book where this is proved as part of some wider theory, ideally by something more elegant than induction?
Yes, the maximum number of zeros that can occur without forcing the determinant to be zero is $N = n(n-1)$. If there are more than $n(n-1)$ zeros then there are fewer than $n$ non-zero entries, so some column (indeed also some row) contains only zero entries, and the determinant is zero. The identity matrix shows that $n(n-1)$ zeros can coexist with a nonzero determinant, so the bound is achieved.
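This can be checked exhaustively for small $n$. The sketch below (my own code, using a naive permutation-sum determinant) verifies for $n = 3$ that every placement of $n(n-1)+1 = 7$ zeros forces a zero determinant, while the identity matrix achieves $n(n-1) = 6$ zeros with determinant $1$:

```python
from itertools import combinations, permutations

def det(M):
    """Naive determinant via the signed permutation sum."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        # sign of the permutation from its inversion count
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        total += sign * prod
    return total

n = 3
# More than n(n-1) zeros: every placement kills the determinant.
for zeros in combinations(range(n * n), n * (n - 1) + 1):
    M = [[0 if i * n + j in zeros else 1 for j in range(n)] for i in range(n)]
    assert det(M) == 0
# Exactly n(n-1) zeros can still leave it nonzero: the identity matrix.
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
print(det(identity))  # 1
```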
A related question is how to maximize the number of non-zero terms in the determinant when $N = n$, i.e. when the $n \times n$ matrix contains exactly $n$ zero entries.
The value of each term in the determinant sum is the product of the entries along the main diagonal in one of the $n!$ matrices that result from a permutation of the columns (or rows) of the original matrix. This ignores a factor of $\pm 1$, but for the purposes of counting non-zero terms this is irrelevant.
If we have one zero in each row and in each column (e.g. the $n$ zeros lie along a diagonal) then a zero term will occur if the permutation of columns puts any column in the position where its zero is on the main diagonal. On the other hand, a non-zero term will occur if each column is in one of the other $n-1$ positions where its zero is off the main diagonal.
Of the $n!$ permutations of columns there are $!n$ (the sub-factorial of $n$) in which no zero lies on the main diagonal, where
$!n = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}$
because this is the same as counting the derangements of $n$ elements.
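The formula is easy to verify against a direct count for small $n$ (illustrative Python; the function names are my own):

```python
from itertools import permutations
from math import factorial

def subfactorial(n):
    # !n = n! * sum_{i=0}^{n} (-1)^i / i!, computed exactly with integers
    return sum((-1) ** i * (factorial(n) // factorial(i)) for i in range(n + 1))

def count_derangements(n):
    # direct count: permutations of {0, ..., n-1} with no fixed point
    return sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))

print([subfactorial(n) for n in range(1, 7)])  # [0, 1, 2, 9, 44, 265]
assert all(subfactorial(n) == count_derangements(n) for n in range(1, 8))
```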
I believe this placement maximizes the number of non-zero terms and minimizes the number of zero terms. If two zeros lie in the same row or column then I suspect the number of zero terms does not decrease.
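For $n = 3$ this belief can be checked exhaustively. The sketch below (my own code) compares the diagonal placement of the $3$ zeros against every possible placement of $3$ zeros in a $3 \times 3$ all-ones matrix, and the diagonal placement does achieve the maximum, namely the derangement number $!3 = 2$:

```python
from itertools import combinations, permutations

def nonzero_terms(zero_positions, n):
    """Count the permutation terms that avoid every zero entry."""
    zeros = set(zero_positions)
    return sum(
        all((i, p[i]) not in zeros for i in range(n))
        for p in permutations(range(n))
    )

n = 3
cells = [(i, j) for i in range(n) for j in range(n)]
best = max(nonzero_terms(z, n) for z in combinations(cells, n))
diagonal = nonzero_terms([(i, i) for i in range(n)], n)
print(best, diagonal)  # 2 2
```

Running this also suggests (for $n = 3$ at least) that the diagonal placement is a maximizer but not necessarily the unique one.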