Can 6x6 determinants be expanded into sets of smaller determinants by treating each 2x2 "block" as one term of a 3x3 determinant?


This is what I'm trying to do:

I'm unsure if determinants can actually be expanded in this way. If it were a 3x3, I would expand it into three 2x2 determinants with the terms of the first row as constants in front of each one. Does the same principle apply here?

There are 2 best solutions below


No, this doesn’t work. Two ways to see this:

The determinant has one term for each permutation of the indices. That implies that it’s $\pm1$ for every permutation matrix. Your expansion is zero for a permutation matrix that’s diagonal except $a_{23}=a_{32}=1$ and $a_{22}=a_{33}=0$.

Or: the determinant is the unique function from the set of matrices to the underlying field that is linear in each column, alternating in the columns (i.e. $0$ if two columns are identical), and yields $1$ for the identity. Your expansion doesn’t yield $0$ e.g. for a matrix with $1$s on the diagonal and $a_{23}=a_{32}=1$, even though the second and third columns are the same.
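The failure can also be checked numerically. The sketch below assumes the proposed scheme means "take the 3×3 determinant whose entries are the determinants of the nine 2×2 blocks," and applies it to the permutation matrix from the first argument:

```python
import numpy as np

# Permutation matrix from the argument above: the identity with the
# swap a_{23} = a_{32} = 1 and a_{22} = a_{33} = 0 (1-based indices).
A = np.eye(6)
A[1, 1] = A[2, 2] = 0.0
A[1, 2] = A[2, 1] = 1.0

# The proposed scheme, as I read it: a 3x3 determinant whose entries
# are the determinants of the nine 2x2 blocks of A.
B = np.array([[np.linalg.det(A[2*i:2*i+2, 2*j:2*j+2]) for j in range(3)]
              for i in range(3)])

print(np.linalg.det(A))  # -1: a transposition has determinant -1
print(np.linalg.det(B))  #  0: every 2x2 block touching the swap is singular
```

Since the two values disagree on this matrix, the block-wise expansion cannot equal the determinant in general.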


Welcome to Math StackExchange!

There is a way to do this, but not necessarily what you're thinking...

For a $4 \times 4$ determinant:

$$ \begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4} \\ \end{vmatrix} = a_{1,1} \begin{vmatrix} a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,2} & a_{4,3} & a_{4,4} \\ \end{vmatrix} - a_{1,2} \begin{vmatrix} a_{2,1} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,3} & a_{4,4} \\ \end{vmatrix} + a_{1,3} \begin{vmatrix} a_{2,1} & a_{2,2} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,4} \\ \end{vmatrix} - a_{1,4} \begin{vmatrix} a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \\ a_{4,1} & a_{4,2} & a_{4,3} \\ \end{vmatrix} $$
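As a sanity check of this first-row expansion, here is a short recursive implementation (illustrative only; it works on plain nested lists and runs in $O(n!)$ time, so it is for clarity, not speed):

```python
import numpy as np

def det_cofactor(A):
    """Cofactor (Laplace) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 1 and column j+1 (1-based).
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

A = [[2., 1., 0., 3.],
     [1., 4., 2., 0.],
     [0., 2., 5., 1.],
     [3., 0., 1., 6.]]
print(det_cofactor(A), np.linalg.det(np.array(A)))  # the two agree
```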

This you probably knew already. The way to expand a $4 \times 4$ determinant into $2 \times 2$ determinants is to go through every possible pair of columns:

$$ \begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4} \\ \end{vmatrix} = \begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ \end{vmatrix} \begin{vmatrix} a_{3,3} & a_{3,4} \\ a_{4,3} & a_{4,4} \\ \end{vmatrix} - \begin{vmatrix} a_{1,1} & a_{1,3} \\ a_{2,1} & a_{2,3} \\ \end{vmatrix} \begin{vmatrix} a_{3,2} & a_{3,4} \\ a_{4,2} & a_{4,4} \\ \end{vmatrix} + \begin{vmatrix} a_{1,1} & a_{1,4} \\ a_{2,1} & a_{2,4} \\ \end{vmatrix} \begin{vmatrix} a_{3,2} & a_{3,3} \\ a_{4,2} & a_{4,3} \\ \end{vmatrix} + \begin{vmatrix} a_{1,2} & a_{1,3} \\ a_{2,2} & a_{2,3} \\ \end{vmatrix} \begin{vmatrix} a_{3,1} & a_{3,4} \\ a_{4,1} & a_{4,4} \\ \end{vmatrix} - \begin{vmatrix} a_{1,2} & a_{1,4} \\ a_{2,2} & a_{2,4} \\ \end{vmatrix} \begin{vmatrix} a_{3,1} & a_{3,3} \\ a_{4,1} & a_{4,3} \\ \end{vmatrix} + \begin{vmatrix} a_{1,3} & a_{1,4} \\ a_{2,3} & a_{2,4} \\ \end{vmatrix} \begin{vmatrix} a_{3,1} & a_{3,2} \\ a_{4,1} & a_{4,2} \\ \end{vmatrix} $$

...In other words, there are 6 ways to pick two columns at a time: columns 1 & 2, 1 & 3, 1 & 4, 2 & 3, 2 & 4, and 3 & 4. In each product above, the first $2 \times 2$ determinant uses one of these column pairs together with the first two rows, and the second uses the two remaining columns together with the last two rows. The $\pm$ signs come from $(-1)^{c_1 + c_2 + 1}$, where $c_1$ and $c_2$ are the chosen column numbers.
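This pairwise (Laplace) expansion can be sketched in code; the function names below are my own, and the sign rule is the $(-1)^{c_1 + c_2 + 1}$ formula with columns numbered from 1:

```python
import numpy as np
from itertools import combinations

def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det4_by_pairs(A):
    """4x4 determinant via expansion along the first two rows."""
    total = 0.0
    for c1, c2 in combinations(range(4), 2):       # 6 column pairs
        rest = [c for c in range(4) if c not in (c1, c2)]
        top = [[A[0][c1], A[0][c2]], [A[1][c1], A[1][c2]]]  # rows 1-2
        bot = [[A[2][rest[0]], A[2][rest[1]]],
               [A[3][rest[0]], A[3][rest[1]]]]              # rows 3-4
        sign = (-1) ** ((c1 + 1) + (c2 + 1) + 1)   # columns numbered from 1
        total += sign * det2(top) * det2(bot)
    return total

A = [[2., 1., 0., 3.],
     [1., 4., 2., 0.],
     [0., 2., 5., 1.],
     [3., 0., 1., 6.]]
print(det4_by_pairs(A), np.linalg.det(np.array(A)))  # the two agree
```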

As a kind of pseudo-proof, consider breaking up the matrix twice:

$$\begin{vmatrix} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4} \\ \end{vmatrix} = $$ $$a_{1,1} \begin{vmatrix} a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,2} & a_{4,3} & a_{4,4} \\ \end{vmatrix} + \text{stuff} = $$ $$a_{1,1}\left( a_{2,2} \begin{vmatrix} a_{3,3} & a_{3,4} \\ a_{4,3} & a_{4,4} \\ \end{vmatrix} - a_{2,3} \begin{vmatrix} a_{3,2} & a_{3,4} \\ a_{4,2} & a_{4,4} \\ \end{vmatrix} + a_{2,4} \begin{vmatrix} a_{3,2} & a_{3,3} \\ a_{4,2} & a_{4,3} \\ \end{vmatrix}\right) + \text{stuff}$$

...combine the singleton terms together, and you will find that they create the formula above.

If you are interested in the time it takes to compute a determinant: per the Calculation section of Wikipedia's entry on determinants, it can be computed in roughly the same asymptotic time as matrix multiplication (in practice, $O(n^3)$ via LU decomposition), which is far faster than any of these expansions, and I can vouch for that.
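For reference, here is a sketch of the $O(n^3)$ approach: Gaussian elimination with partial pivoting, tracking the sign flips from row swaps. The function name is my own:

```python
import numpy as np

def det_lu(A):
    """Determinant via Gaussian elimination with partial pivoting, O(n^3)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    det = 1.0
    for k in range(n):
        p = k + int(np.argmax(np.abs(A[k:, k])))   # pivot row
        if A[p, k] == 0.0:
            return 0.0                             # singular matrix
        if p != k:
            A[[k, p]] = A[[p, k]]
            det = -det                             # each row swap flips the sign
        det *= A[k, k]                             # product of pivots
        A[k+1:, k:] -= np.outer(A[k+1:, k] / A[k, k], A[k, k:])
    return det

A = [[2., 1., 0., 3.],
     [1., 4., 2., 0.],
     [0., 2., 5., 1.],
     [3., 0., 1., 6.]]
print(det_lu(A), np.linalg.det(np.array(A)))  # the two agree
```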