Why does this determinant computation fail?


Let

$$A=\begin{bmatrix} &&&&&1\\ &&&&1&\\ &&&1&&\\ &&1&&&\\ &1&&&&\\ 1&&&&&\\ \end{bmatrix}$$

where the blank entries are zeros. We wish to calculate the determinant of $A$.

Approach 1. Do cofactor expansion along the first row. At each step all but one of the minor determinants vanish, yielding $\det A=(-1)(1)(-1)(1)(-1)(1)=1$.

Approach 2. Apply three column swaps: first swap columns 6 and 1, then columns 5 and 2, and finally columns 3 and 4. Each swap multiplies the determinant by $(-1)$, so overall it is multiplied by $(-1)^3=-1$. The resulting matrix is $\textit{Id}$, so $\det A=(-1)(1)=-1$.

I know that Approach 1 is correct but I do not know how approach 2 arrives at the incorrect result. What am I missing?


2 Answers

BEST ANSWER

The second approach is correct and agrees with the first one, if you realise that $(-1)^3=-1$.
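For a numerical cross-check, here is a minimal pure-Python sketch (the helper `leibniz_det` is illustrative, not part of the original post) that evaluates the determinant directly by the Leibniz sum $\det A=\sum_\sigma \operatorname{sgn}(\sigma)\prod_i a_{i,\sigma(i)}$:

```python
from itertools import permutations

def leibniz_det(M):
    """Determinant via the Leibniz formula: sum over all
    permutations sigma of sgn(sigma) * prod_i M[i][sigma[i]]."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        # Sign of sigma from its inversion count.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += sign * prod
    return total

# The 6x6 anti-diagonal matrix A from the question.
A = [[1 if i + j == 5 else 0 for j in range(6)] for i in range(6)]
print(leibniz_det(A))  # -1
```

Only one permutation gives a nonzero product here, and its sign is negative, which is exactly why the answer is $-1$ rather than $1$.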

SECOND ANSWER

No, what you did in Approach 1 isn't correct. The product you computed corresponds to the term

$$a_{16}a_{25}a_{34}a_{43}a_{52}a_{61}$$

which corresponds to the permutation

$$\begin{pmatrix}1&2&3&4&5&6\\ 6&5&4&3&2&1\end{pmatrix}=(16)(25)(34)$$

and as a product of three (an odd number of) transpositions, its sign is minus one ($-1$). So that single term contributes $-1$, and $\det A=-1$.
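The parity argument above can be sketched in a few lines of Python (the `perm_sign` helper is illustrative): the permutation sending $(1,\dots,6)$ to $(6,\dots,1)$ has $15$ inversions, hence odd parity and sign $-1$.

```python
def perm_sign(p):
    """Sign of a permutation given in one-line notation
    (0-indexed), computed from its inversion count."""
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if p[i] > p[j])
    return -1 if inversions % 2 else 1

# The permutation 1->6, 2->5, ..., 6->1 (0-indexed: i -> 5 - i).
p = [5, 4, 3, 2, 1, 0]
print(perm_sign(p))  # -1
```

Counting inversions agrees with counting transpositions modulo 2, so this matches the decomposition $(16)(25)(34)$ into three transpositions.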