Does every matrix that permutes the order of the basis vectors of a finite-dimensional vector space necessarily have the form of a permutation matrix?


Let $V$ be a finite dimensional vector space and $(v_j)_{j=1}^n$ a basis of $V$. By a permutation of the order of the basis elements I mean that $Av_j = v_{\sigma(j)}$ for some $\sigma \in S_n$, where $S_n$ is the symmetric group on $n$ letters.

I understand that the question may sound a bit nonsensical: "If $A$ permutes the basis vectors, then isn't it by definition a permutation matrix in the sense of Wikipedia?". What I mean is this: it makes total sense to me that if the basis vectors are standard in the sense that $\pi_i(v_j) = \delta^i_j$ (where $\pi_i$ is the projection onto the $i$th coordinate and $\delta^i_j$ is the Kronecker delta), then yes, $A$ must contain only zeroes and ones, as in a standard permutation matrix. But what if the basis is not standard, i.e. the condition $\pi_i(v_j) = \delta^i_j$ does not hold? Assuming the permutation property of $A$ holds, what do we then know about its structure?
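To make the non-standard case concrete, here is a small numerical sketch in $\mathbb{R}^2$ (the basis $v_1 = (1,0)$, $v_2 = (1,1)$ is just an assumed example of what I mean):

```python
import numpy as np

# A non-standard basis of R^2, stacked as columns: v1 = (1, 0), v2 = (1, 1).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The permuted basis: same columns, swapped (sigma = the transposition).
C = B[:, [1, 0]]

# A is the unique matrix with A v_i = v_{sigma(i)}, namely A = C B^{-1}.
A = C @ np.linalg.inv(B)
print(A)  # contains entries other than 0 and 1
```

So at least in this example, the matrix that permutes the $v_j$, written in standard coordinates, does not itself look like a permutation matrix.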

There are 3 best solutions below

---

If you multiply any square matrix $A$ by a permutation matrix $P$ of the same size, the result simply permutes either the rows or the columns of $A$, depending on whether you premultiply or postmultiply. So the answer to your question is "yes" whether or not a standard basis is used, provided you write the matrix of the map with respect to the basis being permuted.
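The row/column fact is easy to check numerically; a minimal sketch (the matrix and the permutation are arbitrary choices of mine):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)   # any square matrix
P = np.eye(3)[:, [1, 2, 0]]        # permutation matrix: P e_j = e_{sigma(j)}

# Premultiplying permutes the rows of A; postmultiplying permutes its columns.
assert np.array_equal(P @ A, A[[2, 0, 1], :])
assert np.array_equal(A @ P, A[:, [1, 2, 0]])
```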

---

Firstly, I don't think it's ever accurate to say that a transformation is a matrix, in the same way I don't think it's reasonable to call a numeral a number. One is a mathematical object and the other is the symbolic encoding we choose to represent it by. We use Arabic numerals and a positional number system because they have useful properties that help us do calculations, but two will be prime in any other base, or even if we didn't use a positional number system at all. The number has properties that are independent of how we choose to represent it. So I would encourage you to keep the geometric notion of the map clearly separate from the matrix that represents it.

In fact, this is where the name of representation theory comes from: you study groups precisely through how they act on a vector space, which is exactly what you're doing here. One nice property, as you've noted, is that we can always represent $S_n$ on an $n$-dimensional space by choosing a basis and letting it act on that basis in the obvious way. What's important here is that the geometry of this action looks very different depending on which basis we choose. The matrices don't change, they're just permutations of the identity matrix, but what happens to the space is very different depending on the basis. I would encourage you to explore this over $\mathbb{R}^2$ by putting the basis in one quadrant, or in distinct quadrants, and seeing how the transformations differ despite having the same representation as matrices. This should help build some intuition as to why the distinction is relevant.
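To illustrate the last point, here is a small sketch (the two bases are assumed examples of mine): the matrix of the swap in the chosen basis is always the same permutation of the identity, yet the resulting map on the plane depends heavily on the basis.

```python
import numpy as np

Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # the swap, written in whatever basis we pick

def swap_in_standard_coords(B):
    """Standard-coordinate matrix of the map that swaps the columns of B."""
    return B @ Q @ np.linalg.inv(B)

B1 = np.array([[1.0, 1.0],
               [0.0, 1.0]])   # both basis vectors in the first quadrant
B2 = np.array([[1.0, -1.0],
               [1.0, 1.0]])   # basis vectors in the first and second quadrants

print(swap_in_standard_coords(B1))  # an oblique reflection fixing v1 + v2
print(swap_in_standard_coords(B2))  # the orthogonal reflection across the y-axis
```

Both maps swap their respective basis vectors, yet geometrically one is an oblique reflection and the other an orthogonal one.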

---

It depends on the basis in which you write down the matrix.

Let's call the "standard" basis that we use to write vectors and matrices with coordinates $(e_i)_i$. Then we can map the standard basis onto the basis $(v_j)_j$ by $$ B = \Big( v_1 \quad v_2 \quad \dots \quad v_n \Big) $$ and onto the permuted basis by

$$ C = \Big( v_{\sigma(1)} \quad v_{\sigma(2)} \quad \dots \quad v_{\sigma(n)} \Big) $$

Then $v_{\sigma(i)} = Ce_i = CB^{-1}v_i$, so $A = CB^{-1}$. Since $B$ is invertible, $A$ is uniquely determined by the condition $AB = C$.

Now $C$ has the same columns as $B$, just in a different order, and permuting the columns of a matrix is *right* multiplication by a permutation matrix: $C = BQ$ with $Q = (\delta_{i\,\sigma(j)})_{ij}$, i.e. $Qe_j = e_{\sigma(j)}$. Therefore $$ A = BQB^{-1}. $$ In other words, $A$ is similar to the permutation matrix $Q$; indeed, $Q$ is exactly the matrix of your map with respect to the basis $(v_j)_j$. In standard coordinates, however, $A$ is a permutation matrix only in special cases, for instance when $(v_j)_j$ is itself the standard basis.
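As a numerical sanity check (the random basis and the permutation are my own assumed examples), one can verify that the column permutation enters as right multiplication and that $A = CB^{-1}$ comes out similar to the permutation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))   # a generic basis, almost surely invertible
perm = [2, 0, 3, 1]               # sigma, written as a column reindexing
C = B[:, perm]                    # same columns as B, in permuted order
Q = np.eye(n)[:, perm]            # permutation matrix with Q e_j = e_{sigma(j)}

assert np.allclose(B @ Q, C)      # permuting columns = right multiplication
A = C @ np.linalg.inv(B)          # the unique A with A B = C
assert np.allclose(A, B @ Q @ np.linalg.inv(B))  # A is similar to Q
```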