The transpose of a permutation matrix is its inverse.


This is a question from the free Harvard online abstract algebra lectures. I'm posting my solutions here to get some feedback on them. For a fuller explanation, see this post.

This problem is from assignment 4.

Prove that the transpose of a permutation matrix $P$ is its inverse.

A permutation matrix $P$ has a single 1 in each row and a single 1 in each column, all other entries being 0. Say the 1 in column $j$ sits in row $i_j$, i.e. at position $(i_j, j)$. Then $P$ acts by moving row $j$ to row $i_j$ for each $j$. Taking the transpose of $P$ moves each 1 entry from position $(i_j, j)$ to position $(j, i_j)$, so $P^t$ acts by moving row $i_j$ back to row $j$. Since this is the inverse operation, $P^t = P^{-1}$.
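As a sanity check on the argument (not part of the proof), here is a small numpy sketch: it builds the permutation matrix whose column $j$ has its single 1 in row $i_j$, for an example permutation of my choosing, and verifies that the transpose undoes the action.

```python
import numpy as np

# Example permutation: j -> sigma[j]; column j of P gets its 1 in row sigma[j].
sigma = [2, 0, 3, 1]
n = len(sigma)
P = np.zeros((n, n), dtype=int)
for j, i in enumerate(sigma):
    P[i, j] = 1  # the 1 in column j sits at position (i_j, j)

# P moves entry j of a vector to position sigma[j]; P.T moves it back.
assert np.array_equal(P.T @ P, np.eye(n, dtype=int))
assert np.array_equal(P @ P.T, np.eye(n, dtype=int))
```

Running the same check for any other permutation works the same way, since nothing about `sigma` was special.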

Again, I welcome any critique of my reasoning and/or my style as well as alternative solutions to the problem.

Thanks.


Best answer:

A direct computation is also fine: $$(PP^T)_{ij} = \sum_{k=1}^n P_{ik} P^T_{kj} = \sum_{k=1}^n P_{ik} P_{jk}$$ but $P_{ik}$ is usually 0, and so $P_{ik} P_{jk}$ is usually 0. The only time $P_{ik}$ is nonzero is when it is 1, but then no other row $i' \neq i$ has $P_{i'k}$ nonzero ($i$ is the only row with a 1 in column $k$). In other words, $$\sum_{k=1}^n P_{ik} P_{jk} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}$$ and this is exactly the formula for the entries of the identity matrix, so $$PP^T = I$$
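The entrywise sum above can be checked directly in a few lines; the example matrix $P$ below is my own choice, and the loop computes $\sum_k P_{ik} P_{jk}$ exactly as written and compares it to the identity-matrix formula.

```python
import numpy as np

# An example permutation matrix (one 1 per row and per column).
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
n = P.shape[0]

# (P P^T)_{ij} = sum_k P_{ik} P_{jk}, which should be 1 iff i == j.
for i in range(n):
    for j in range(n):
        entry = sum(P[i, k] * P[j, k] for k in range(n))
        assert entry == (1 if i == j else 0)
```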


Less sophisticated, you could just crunch it out.

First, a lemma:

The inverse of a matrix, if it exists, is unique.

Proof: If both $B$ and $C$ are inverse to $A$, then we have $B = BI = B(AC) = (BA)C = IC = C$ so $B = C$. (Here, $I$ denotes the identity matrix).

Using this, it follows in our specific case that in order to show $A^T = A^{-1}$, we need only show $A^TA = AA^T = I$.

Assume $i\neq j$. Then $(AA^T)_{ij} = \sum_k A_{ik}A^T_{kj} = \sum_k A_{ik}A_{jk}$. But for each $k$, $A_{ik}A_{jk} = 0$ since there is only one nonzero entry in the $k$th row and $i\neq j$ (so $A_{ik}$ and $A_{jk}$ can't both be the nonzero entry). So, $(AA^T)_{ij} = 0$ when $i\neq j$.

The argument that $(A^TA)_{ij} = 0$ when $i\neq j$ is almost identical, but uses the fact that the columns of $A$ contain only one nonzero entry.

Can you see what happens when, instead, $i = j$?


Using a little knowledge about orthogonal matrices, the following proof is pretty simple:

Since $v^tw=\sum_{i=1}^n v_iw_i$ for $v=(v_1,\dots,v_n)$, $w=(w_1,\dots,w_n)$, we have $v^tv=1$ whenever $v$ is a column of $P$. On the other hand, $v^tw=0$ if $v$ and $w$ are two distinct columns of $P$, since distinct columns have their single 1 in different rows. Therefore we can conclude that $(P^tP)_{i,j}=\delta_{i,j}$, i.e. $P^tP=I$, and so $P^t=P^{-1}$.
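The column dot products can be tabulated directly; in this sketch (example matrix mine), every pairwise product of columns of $P$ gives the Kronecker delta $\delta_{ij}$, which is the orthonormality claim above.

```python
import numpy as np

# An example permutation matrix; its columns are distinct standard basis vectors.
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
n = P.shape[0]

# v^t v = 1 for each column, and v^t w = 0 for distinct columns.
for i in range(n):
    for j in range(n):
        assert P[:, i] @ P[:, j] == (1 if i == j else 0)
```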


Let $P$ be an arbitrary permutation matrix. A matrix is unitary if and only if its columns form an orthonormal basis. Since the columns of a permutation matrix are distinct vectors of the standard basis, they form an orthonormal basis, so $P$ is a unitary matrix. In particular, since $P$ is real, $P^{-1} = P^*= P^t$.