Assume that A is a $3\times3$ matrix, where all entries are either $1$ or $-1$ and the rows are linearly independent.
Why is it the case that the determinant will always be $4$ or $-4$?
---
Recall that the following row/column operations will only change the determinant by a negative sign:
Multiply a row/column by $-1$;
Swap two rows/columns.
From here on we only use these two operations.
Now start with an arbitrary $3\times3$ matrix with $\pm1$ entries. Multiply rows by $-1$ as needed to make every entry of the first column equal to $1$. The second column must then have exactly one or two entries equal to $-1$, not zero or three: otherwise it would equal $\pm$ the first column, contradicting the assumption that the matrix has linearly independent rows (which is equivalent to having linearly independent columns). If two entries are $-1$, multiply the second column by $-1$ so that only one entry is $-1$; then swap rows so that this $-1$ sits in the first row.
Now the second and third entries of the third column cannot both be $1$ or both be $-1$, or else two columns would be linearly dependent; so they must differ. The first entry can be either $1$ or $-1$.
The matrix obtained looks like $$\begin{bmatrix}1&-1&a\\1&1&\pm1\\1&1&\mp1\end{bmatrix}$$ where $a$ is either $1$ or $-1$. The determinant of this resulting matrix works out to $\pm4$. The initial matrix's determinant is either the same as this matrix's or its negative, because the row/column operations used change at most the determinant's sign.
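For completeness, expanding along the first row (writing $b=\pm1$ for the $(2,3)$ entry, so the $(3,3)$ entry is $-b$) gives
$$\det\begin{bmatrix}1&-1&a\\1&1&b\\1&1&-b\end{bmatrix}=1\cdot(-b-b)+1\cdot(-b-b)+a\cdot(1-1)=-4b=\mp4.$$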
---
Here is a slightly obscure proof, just for fun.
As all entries of $A$ are equal to $\pm1$, the diagonal entries of $A^TA$ are equal to $3$ and the off-diagonal entries of $A^TA$ are odd numbers between $-3$ and $3$. However, as $A$ is supposed to be nonsingular, $A^TA$ is positive definite, and hence so are its principal $2\times2$ submatrices. It follows that the off-diagonal entries of $A^TA$ cannot equal $\pm3$ (such a submatrix $\left(\begin{smallmatrix}3&\pm3\\ \pm3&3\end{smallmatrix}\right)$ would have determinant $0$); they can only equal $\pm1$. Therefore, by negating the appropriate columns of $A$ if necessary (negating a column of $A$ flips the signs of the corresponding row and column of $A^TA$ and does not change $\det(A)^2$), we may assume that $$ A^TA=\begin{pmatrix}3&1&1\\ 1&3&s\\ 1&s&3\end{pmatrix} $$ where $s=\pm1$. Hence $\det(A^TA)$ equals $20$ when $s=1$, and $16$ when $s=-1$. As $A$ is an integer matrix, $\det(A^TA)=\det(A)^2$ must be a perfect square. Therefore $\det(A)^2=16$ and $\det(A)=\pm4$.
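The claims about $A^TA$ can be checked by brute force; the following sketch (mine, not part of the proof) enumerates all nonsingular $3\times3$ $\pm1$ matrices and verifies the Gram-matrix properties used above:

```python
from itertools import product

def det3(a, b, c, d, e, f, g, h, i):
    # cofactor expansion along the first row
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

nonsingular = 0
for m in product((-1, 1), repeat=9):
    d = det3(*m)
    if d == 0:
        continue
    nonsingular += 1
    A = [m[0:3], m[3:6], m[6:9]]
    # Gram matrix G = A^T A
    G = [[sum(A[k][i]*A[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    assert all(G[i][i] == 3 for i in range(3))          # diagonal entries are 3
    assert all(abs(G[i][j]) == 1 for i in range(3)
               for j in range(3) if i != j)             # off-diagonals are +-1
    assert d * d == 16                                  # det(A)^2 is the perfect square 16
```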
---
HINT:
Add the first column to columns $2$ and $3$; the determinant does not change. Every entry of the new columns $2$ and $3$ is $0$ or $\pm2$, so a factor of $2$ can be pulled out of each of them: the determinant is divisible by $4$.
The determinant has $6$ terms, each $\pm 1$. So it is between $-6$ and $6$.
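For instance (example matrix mine), the two observations combine as
$$\det\begin{bmatrix}1&-1&1\\1&1&-1\\1&1&1\end{bmatrix}=\det\begin{bmatrix}1&0&2\\1&2&0\\1&2&2\end{bmatrix}=2\cdot2\cdot\det\begin{bmatrix}1&0&1\\1&1&0\\1&1&1\end{bmatrix}=4,$$
a nonzero multiple of $4$ in the range $[-6,6]$, hence $\pm4$.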
---
Just for fun, here is the dumbest possible proof by exhaustion, to quickly check the stated fact.
For small matrices it is easy to enumerate all of them using a spreadsheet.
For the $3\times 3$ case we can lay out each matrix with $\pm1$ elements element-wise as a row of $9$ cells, $2^9=512$ rows in total, add a column with the formula for the determinant, and then use a countif() formula to find that, along with $320$ singular matrices ($\det=0$), there are $96$ matrices with $\det=-4$ and the same number with $\det=4$, accounting for all $512$; that is, no other determinant values occur.
The table itself is very easy to build, similarly to the truth table of a boolean function of $9$ variables, but using $-1$ instead of $0$ and starting with the row of nine $-1$ entries.
Similarly, one can find that for $4\times4$ matrices the possible nonzero determinants are $\pm8$ and $\pm16$.
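The same exhaustive check fits in a few lines of code; here is a sketch (mine, not part of the answer) reproducing the spreadsheet counts in Python:

```python
from collections import Counter
from itertools import product

def det3(a, b, c, d, e, f, g, h, i):
    # cofactor expansion along the first row
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# all 2**9 = 512 sign patterns, one matrix per row of the "truth table"
counts = Counter(det3(*m) for m in product((-1, 1), repeat=9))
# the answer reports 320 singular matrices and 96 each with det = +-4
assert counts == {0: 320, 4: 96, -4: 96}

def det4(m):
    # Laplace expansion along the first row, reusing det3 for the minors
    rows = [m[4*r:4*r + 4] for r in range(4)]
    total, sign = 0, 1
    for col in range(4):
        minor = [v for row in rows[1:] for j, v in enumerate(row) if j != col]
        total += sign * rows[0][col] * det3(*minor)
        sign = -sign
    return total

# the answer reports possible nonzero 4x4 determinants +-8 and +-16
values4 = {det4(m) for m in product((-1, 1), repeat=16)}
assert values4 == {-16, -8, 0, 8, 16}
```

The $4\times4$ check enumerates $2^{16}=65536$ matrices, which still runs in seconds.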
---
By hypothesis, $A$ has the form $$A=\begin{bmatrix} (-1)^a&(-1)^b&(-1)^c\\ (-1)^d&(-1)^e&(-1)^f\\ (-1)^g&(-1)^h&(-1)^i \end{bmatrix},$$ where $a,b,c,d,e,f,g,h,i\in\{0,1\}$. Therefore, a straightforward computation shows that $$\det A=(-1)^{a}\Big[(-1)^{e + i}+(-1)^{f + h+1}\Big] + (-1)^{b}\Big[(-1)^{f + g} +(-1)^{d + i+1}\Big]\\ +(-1)^{c}\Big[(-1)^{d + h} + (-1)^{e + g +1}\Big]$$
Let us write $$r=e+i,\quad s=f+h+1,\quad t=f+g,\quad u=d+i+1,\quad v=d+h,\quad w=e+g+1.$$
Then, $$w=r+t-s-u+v+3$$ and thus $$\det A=(-1)^{a}\underbrace{\Big[(-1)^{r}+(-1)^{s}\Big]}_{X} + (-1)^{b}\underbrace{\Big[(-1)^{t} +(-1)^{u}\Big]}_{Y} \\+(-1)^{c+v}\underbrace{\Big[1 + (-1)^{r+t-s-u+1}\Big]}_{Z} $$
From this equality, we conclude the following: each of $X$, $Y$, $Z$ lies in $\{-2,0,2\}$; moreover, $X=0$ exactly when $r+s$ is odd, $Y=0$ exactly when $t+u$ is odd, and $Z=0$ exactly when $r+s$ and $t+u$ have the same parity. Checking the four parity cases, either all three of $X$, $Y$, $Z$ vanish, or exactly one vanishes while the other two equal $\pm2$; in every case $\det A\in\{-4,0,4\}$.
Since $\det A\neq 0$ by hypothesis, it follows that $\det A =\pm 4$.
---
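The bookkeeping above can be verified by brute force; here is a sketch (mine, not part of the answer) that checks the linear relation, the identity for $\det A$, and the conclusion over all $2^9$ exponent choices:

```python
from itertools import product

checked = 0
for a, b, c, d, e, f, g, h, i in product((0, 1), repeat=9):
    r, s, t = e + i, f + h + 1, f + g
    u, v, w = d + i + 1, d + h, e + g + 1
    assert w == r + t - s - u + v + 3       # the linear relation used above
    X = (-1)**r + (-1)**s
    Y = (-1)**t + (-1)**u
    Z = 1 + (-1)**((r + t - s - u + 1) % 2)
    det = (-1)**a * X + (-1)**b * Y + (-1)**(c + v) * Z
    # compare with the determinant computed directly from the matrix
    A = [[(-1)**a, (-1)**b, (-1)**c],
         [(-1)**d, (-1)**e, (-1)**f],
         [(-1)**g, (-1)**h, (-1)**i]]
    direct = (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
              - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
              + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))
    assert det == direct and det in (-4, 0, 4)
    checked += 1
```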
Since multiplying a row of $A$ by a constant $c$ changes the determinant from $\det(A)$ to $c\times\det(A)$, we may multiply rows by $\pm1$ (which at most changes the sign of the determinant) and suppose that $$ A=\begin{pmatrix}1&a&b\\1&c&d\\1&e&f\end{pmatrix}. $$ Since subtracting one row from another does not affect the determinant, we may change $A$ to the form $$ \begin{pmatrix}1&a&b\\0&c-a&d-b\\0&e-a&f-b\end{pmatrix}. $$ Noticing that we cannot have $c-a=e-a=0$ (otherwise the second column would be a multiple of the first and the rows would not be linearly independent), after possibly swapping the last two rows (another sign change at most) we may assume that this is of the form $$ \begin{pmatrix}1&a&b\\0&\pm2&d-b\\0&e-a&f-b\end{pmatrix}. $$
Now if $e-a=0$, then $f-b\ne0$, so $f-b=\pm2$ and $\det(A)=\pm4$. If $e-a\ne0$, then $e-a=\pm2=\pm(c-a)$.
If $e-a=c-a$, then we may subtract the second row from the third, and $A$ becomes $$ \begin{pmatrix}1&a&b\\0&\pm2&d-b\\0&0&f-d\end{pmatrix}. $$ Here $f\neq d$ (otherwise the last two rows of the original matrix would be equal), so $f-d=\pm2$ and again $\det(A)=\pm4$.
But $e-a$ cannot equal $-(c-a)$: this would mean $e+c=2a$, which forces $e=c=a$ (since $e,c\in\{1,-1\}$), and hence $e-a=0$, contradicting $e-a=\pm2$.
If there is any error or confusion, please let me know. Thanks in advance.