Why do the properties of determinants (used to calculate determinants from multiple matrices) apply not only to rows, but to columns as well?


A set of rules in my textbook is as follows:

a. If $A$ has a zero row (column), then $\det A = 0$

b. If $B$ is obtained by interchanging two rows (columns) of $A$, then $\det B = -\det A$

c. If $A$ has two identical rows (columns), then $\det A = 0$

d. If $B$ is obtained by multiplying a row (column) of $A$ by $k$, then $\det B = k\cdot\det A$

e. If $A$, $B$, and $C$ are identical except that the $i$-th row (column) of $C$ is the sum of the $i$-th rows (columns) of $A$ and $B$, then $\det C = \det B + \det A$

f. If $B$ is obtained by adding a multiple of one row (column) of $A$ to another row (column), then $\det B = \det A$
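(Rules b, d, and f can be sanity-checked numerically; the following NumPy snippet uses an arbitrary random matrix and is just an illustration, not part of the textbook.)

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
dA = np.linalg.det(A)

# (b) interchanging two rows flips the sign of the determinant
B = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(B), -dA)

# (d) multiplying one row by k multiplies the determinant by k
k = 5.0
C = A.copy()
C[0] *= k
assert np.isclose(np.linalg.det(C), k * dA)

# (f) adding a multiple of one row to another leaves the determinant unchanged
D = A.copy()
D[1] += 3.0 * A[0]
assert np.isclose(np.linalg.det(D), dA)
```

The same checks pass if you operate on columns instead (e.g. `A[:, [1, 0, 2]]` for rule b), which is exactly what the question is about.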

I don't understand why "column" is in parentheses after every instance of "row". Is that the same as saying, for example with clause a: "if $A$ has a zero row or a zero column, then $\det A = 0$"? As in, it's saying that "column" and "row" can be used interchangeably in the statement, since the statement holds true either way?

If the above interpretation is correct, then how? Why would these statements that apply to rows also apply to columns? The only situation where I could see that working is if the matrix is symmetric, but the question doesn't specify that; it only says that the matrix is square.

Any help is appreciated.

There are 2 best solutions below


The trick is that $\det A^T=\det A$. So, for example, if $A$ has a zero column, then $A^T$ has a zero row, which gives $\det A = \det A^T = 0$. You can verify every other column version of the rules in the same way, by passing to the transpose.
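A quick numerical illustration of this argument with NumPy (the matrix is an arbitrary random example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# det A == det A^T (up to floating-point error)
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))

# A zero column of A is a zero row of A^T, and both determinants vanish.
B = A.copy()
B[:, 2] = 0.0
assert np.isclose(np.linalg.det(B), 0.0)
assert np.isclose(np.linalg.det(B.T), 0.0)
```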


That's because your textbook isn't really explaining determinants in a way that builds intuition; it's just listing a bunch of their properties.

The key thing to remember is that $\det A$ is the volumetric scaling factor of a matrix $A$ (i.e., $A$ scales the unit cube's volume by $\det A$).
If you've seen eigenvalues, this means that $\det A$ is the product of $A$'s eigenvalues, counted with multiplicity. (This can be a little mind-blowing the first time you realize it, even if you've been studying linear algebra for a while.)
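Here's a small numerical check of that eigenvalue fact, using an arbitrary symmetric $2\times 2$ example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# det A equals the product of A's eigenvalues
eigvals = np.linalg.eigvals(A)
assert np.isclose(np.linalg.det(A), np.prod(eigvals).real)  # both equal 5
```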

If you start from this, then it's far easier to see why rows and columns don't make a difference:

  • The unit cube is represented by its orthogonal edges, which are the columns of $I$, the identity matrix.

  • Right-multiplying by $I$ (an operation that linearly combines $A$'s columns) gives the same result ($A$) as left-multiplying by $I$ (an operation that linearly combines $A$'s rows).

The fact that we have the same output solid regardless of whether we operate on rows or columns of $A$ means that the scaling factor is the same, and hence the determinant is the same.
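This left-versus-right symmetry can also be checked with elementary matrices: a row operation on $A$ is left-multiplication by some elementary matrix $E$, the corresponding column operation is right-multiplication by $E$, and multiplicativity of the determinant ($\det(EA) = \det E \cdot \det A = \det(AE)$) makes both have the same effect. A sketch with an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

# E encodes "add 2 * (row 0) to row 1" when applied on the left,
# and "add 2 * (column 1) to column 0" when applied on the right.
# det E = 1, so neither operation changes the determinant.
E = np.eye(3)
E[1, 0] = 2.0

assert np.isclose(np.linalg.det(E @ A), np.linalg.det(A))  # row operation
assert np.isclose(np.linalg.det(A @ E), np.linalg.det(A))  # column operation
```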