Permute columns by pre-multiplying and rows by post-multiplying?


I was looking at Gilbert Strang's lectures on Linear Algebra and noticed that in lecture 2, Elimination with Matrices, around the 40th minute he mentions that you can use the permutation matrix $$P= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} $$ so that $AP$ is a permutation of $A$'s columns and $PA$ is a permutation of $A$'s rows.
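As a quick sanity check (my own example, not from the lecture), multiplying a sample $A$ by this $P$ on either side shows the two effects:

```python
# Verifying that for the 2x2 swap matrix P, A @ P permutes A's
# columns and P @ A permutes A's rows.

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0, 1],
     [1, 0]]
A = [[1, 2],
     [3, 4]]

print(matmul(A, P))  # columns of A swapped: [[2, 1], [4, 3]]
print(matmul(P, A))  # rows of A swapped:    [[3, 4], [1, 2]]
```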

I was wondering if there exists a matrix $P'$ such that $P'A$ is a permutation of $A$'s columns and $AP'$ is a permutation of $A$'s rows.

  1. How can one prove that there is no $P'$ such that $P'A=AP$ and $AP'=PA$ for all $n\times n$ matrices $A$?
  2. For which matrices $A$ does such a $P'$ exist?

I tried the $2\times 2$ case.

$L$ pre-multiplies $A$ and permutes its columns, $$ \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix}=\begin{bmatrix} b & a \\ d & c \end{bmatrix} \Rightarrow \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}=\frac{1}{ad-bc}\begin{bmatrix} bd-ac & a^2-b^2 \\ d^2-c^2 & ac-bd \end{bmatrix}=L $$

$R$ post-multiplies $A$ and permutes its rows,

$$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}= \begin{bmatrix} c & d \\ a & b \end{bmatrix} \Rightarrow \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}=\frac{1}{ad-bc}\begin{bmatrix} cd-ab & d^2-b^2 \\ a^2-c^2 & ab-cd \end{bmatrix}=R $$

  • $L\neq R$ in the general case
  • $\det(A)\neq 0$ is required for $L$ and $R$ to exist
  • if $\det(A)\neq 0$ and $a=d=0$, then $P'=L=R$
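These three observations can be checked numerically; here is a small script (my own, using exact rational arithmetic) that plugs the formulas above into a sample matrix:

```python
# Checking the 2x2 formulas: L solves L A = (columns of A swapped),
# R solves A R = (rows of A swapped).

from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def L_of(a, b, c, d):
    det = a * d - b * c
    return [[(b*d - a*c) / det, (a*a - b*b) / det],
            [(d*d - c*c) / det, (a*c - b*d) / det]]

def R_of(a, b, c, d):
    det = a * d - b * c
    return [[(c*d - a*b) / det, (d*d - b*b) / det],
            [(a*a - c*c) / det, (a*b - c*d) / det]]

a, b, c, d = F(1), F(2), F(3), F(4)
A = [[a, b], [c, d]]
assert matmul(L_of(a, b, c, d), A) == [[b, a], [d, c]]  # columns swapped
assert matmul(A, R_of(a, b, c, d)) == [[c, d], [a, b]]  # rows swapped
assert L_of(a, b, c, d) != R_of(a, b, c, d)             # L != R in general

# With a = d = 0 the two formulas coincide:
assert L_of(F(0), b, c, F(0)) == R_of(F(0), b, c, F(0))
```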

Additional notes

I feel I have to clarify further. I was looking for a matrix $P'$ that behaves just like $P$ but from the opposite side, meaning $P'$ permutes columns when pre-multiplying $A$ and rows when post-multiplying $A$.


There are 4 answers below.

0
On BEST ANSWER

If there is an ($n\times n$) matrix $P'$ such that $P'A=AP$ and $AP'=PA$, then $PA^2=AP'A=A^2P$. The latter holds if $A^2$ is a linear combination of powers of $P$. Moreover, by the equivalences (I) $\Leftrightarrow$ (IV) and (I) $\Leftrightarrow$ (V) of the Theorem from Will Jagy's answer, the set of matrices commuting with $P$ coincides with the set of linear combinations of powers of $P$ iff the permutation represented by $P$ is a cycle. Conversely, if $PA^2=A^2P$ and $\det A\ne 0$, then $P'=APA^{-1}$ satisfies $P'A=AP$ and $AP'=PA$.
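The converse direction can be illustrated concretely (my own example, chosen so that $A^2$ is a scalar matrix and therefore commutes with $P$):

```python
# Sanity check: if P A^2 = A^2 P and A is invertible, then
# P' = A P A^{-1} satisfies P'A = AP and AP' = PA.
# Here A = [[0, b], [c, 0]], so A^2 = bc * I commutes with everything.

from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

P = [[F(0), F(1)], [F(1), F(0)]]
A = [[F(0), F(2)], [F(3), F(0)]]          # a = d = 0, b = 2, c = 3

assert matmul(P, matmul(A, A)) == matmul(matmul(A, A), P)  # hypothesis

Pp = matmul(A, matmul(P, inv2(A)))        # P' = A P A^{-1}
assert matmul(Pp, A) == matmul(A, P)      # P'A = AP
assert matmul(A, Pp) == matmul(P, A)      # AP' = PA
```

Note that the resulting $P'$ here is exactly the matrix $L=R$ found in the question's $a=d=0$ case.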

3
On

Let's start with the case of $PA$. Let $\pi$ be the permutation which generates $P=(p_{ij})$; by this, I mean $p_{ij}=1$ if and only if $\pi(j)=i$. Next, note that $$ (PA)_{ij}=\sum_{k}p_{ik}a_{kj}=a_{\pi^{-1}(i)j}. $$ That is, $PA$ is exactly the matrix which has its rows permuted according to $\pi$. Therefore, in any dimension $n\geq2$, picking $A$ with all entries distinct yields the desired result: for such an $A$, $PA$ is not a permutation of the columns of $A$ for any nontrivial permutation matrix $P$.

The argument is symmetric for $AP$.
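A brute-force check of this claim (my own example) for a $2\times2$ matrix with distinct entries:

```python
# With all entries of A distinct, PA permutes rows, and it does not
# equal A with its columns permuted in any order.

from itertools import permutations

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
P = [[0, 1],
     [1, 0]]

PA = matmul(P, A)
assert PA == [[3, 4], [1, 2]]             # rows of A permuted

# Every column permutation of A, for comparison:
col_perms = [[[row[j] for j in perm] for row in A]
             for perm in permutations(range(2))]
assert PA not in col_perms
```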

3
On

$\newcommand\bm\boldsymbol$

If such a $\bm P'$ works for all matrices, then for all $\bm A$, $$ \bm {AP} = \bm {P'A}. $$ In particular, it must hold for the identity matrix $\bm I$, i.e. $$ \bm {IP} = \bm {P'I}, $$ so the only candidate for $\bm P'$ is $\bm P$ again. But clearly $$ \bm {PA} = \bm {AP} $$ only holds for specific matrices $\bm A$.

Conclusion: for some particular $\bm A$ there may exist a $\bm P'$ such that $\bm {P'A}$ swaps two columns of $\bm A$, but there is no universal matrix $\bm P'$.
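The two steps of the argument can be replayed numerically (my own toy example): plugging in $A=I$ forces $P'=P$, and that candidate already fails for a generic $A$.

```python
# Step 1: A = I forces P' = P.  Step 2: P fails as P' for generic A.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0, 1],
     [1, 0]]
I = [[1, 0],
     [0, 1]]
A = [[1, 2],
     [3, 4]]

# If P'A = AP for all A, then taking A = I gives P' = I P = P:
assert matmul(I, P) == P

# ...but this only candidate fails for a generic A:
assert matmul(P, A) != matmul(A, P)
```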

0
On

This is a sketch of the answer to question 2.

Let $\mathcal M$ be the space of $n \times n$ matrices. We may write a generic element $M \in \mathcal M$ as

$$ \newcommand \mb { \begin{bmatrix} } \newcommand \me { \end{bmatrix} } \newcommand \md { \vdots & \ddots & \vdots \\ } \newcommand \mr [1] { m_{#1,1} & \dots & m_{#1,n} \\ } M = \mb \mr 1 \md \mr n \me $$

Let $X = \{ 1 \dots n \}$, and let $\alpha, \beta : X \to X$ be any two permutations. We may represent them as linear transformations $\alpha, \beta : \mathcal M \to \mathcal M$, where $\alpha$ permutes the rows of its argument, and $\beta$ permutes the columns of its argument. That is,

$$ \newcommand \rp [1] { m_{\alpha(#1), 1} & \dots & m_{\alpha(#1), n} \\ } \newcommand \cp [1] { m_{#1, \beta(1)} & \dots & m_{#1, \beta(n)} \\ } \alpha(M) = \mb \rp 1 \md \rp n \me, \qquad \beta (M) = \mb \cp 1 \md \cp n \me $$

We want to find the matrices $M \in \ker {(\alpha - \beta)}$. Working elementwise, we want $m_{\alpha(i), j} = m_{i, \beta(j)}$ for every $i, j \in X$. This is a homogeneous linear system of $n^2$ equations in $n^2$ variables.

Note that the entries of $M$ are indexed by $X^2$. Define the equivalence relation $\sim_{\alpha, \beta}$ on $X^2$, where $(i,j) \sim_{\alpha, \beta} (k,l)$ if and only if the equation $m_{ij} = m_{kl}$ can be deduced from the system. Let $S = X^2 / \sim_{\alpha, \beta}$ be the set of equivalence classes of $\sim_{\alpha, \beta}$. Then, $M \in \ker {(\alpha - \beta)}$ if and only if there exist scalars $\{ w_s \}_ {s \in S}$ such that $m_{ij} = w_{\overline {(i,j)}}$ for every $i, j \in X$.

Actually computing $\sim_{\alpha, \beta}$ is a combinatorial problem, outside of my field of expertise.
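For small cases the relation can be computed mechanically. The system is generated by $(\alpha(i), j) \sim (i, \beta(j))$, so the classes are the orbits of the bijection $T(i,j) = (\alpha^{-1}(i), \beta(j))$ on $X^2$; the number of orbits is then $\dim \ker(\alpha - \beta)$. Here is a sketch (my own, with an assumed example of a 3-cycle and a transposition, 0-indexed):

```python
# Enumerate the equivalence classes of ~ as orbits of
# T(i, j) = (alpha^{-1}(i), beta(j)) on X^2.

def orbits(alpha, beta, n):
    """alpha, beta given as lists: alpha[i] = alpha(i)."""
    inv_alpha = {alpha[i]: i for i in range(n)}     # alpha^{-1} as a dict
    seen, classes = set(), []
    for start in ((i, j) for i in range(n) for j in range(n)):
        if start in seen:
            continue
        orbit, p = [], start
        while p not in seen:                        # walk the orbit of start
            seen.add(p)
            orbit.append(p)
            p = (inv_alpha[p[0]], beta[p[1]])       # apply T
        classes.append(orbit)
    return classes

alpha = [1, 2, 0]          # the 3-cycle 0 -> 1 -> 2 -> 0
beta  = [1, 0, 2]          # swaps 0 and 1, fixes 2

cls = orbits(alpha, beta, 3)
print(len(cls))            # 2 classes, so dim ker(alpha - beta) = 2
```

For this pair the 9 entries of $M$ split into one class of 6 and one class of 3, so the solution space is 2-dimensional.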