Two changes of basis give the same result. Must the bases be the same?


Let $A,P,Q$ be square matrices where $P$ and $Q$ are invertible and $A \neq I$. Suppose $$ Q^{-1} A Q = P^{-1} A P \neq 0. $$ Can we conclude that $P = Q$? My intuition says that this is correct, though I wasn't able to prove it.

If necessary, we can assume that $A$ is invertible as well.

Of course, we have $$ A = (QP^{-1}) A (PQ^{-1}) $$ so, putting $Z = PQ^{-1}$, we have that $A$ is similar to itself via $Z$: $A = Z^{-1} A Z$. But does this imply $Z = I$?

EDIT: I actually just found a counterexample where $A$ is diagonal and $Z$ is a permutation matrix with $Z = Z^{-1}$. So I guess my next question is: under what conditions do we have $Z = I$? Thanks.
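The post doesn't give the specific matrices, but one concrete instance of such a counterexample can be checked numerically. The matrices below are my own illustrative choice: note that for a diagonal $A$ and a permutation $Z$ to commute, $Z$ can only permute positions where $A$ has equal diagonal entries, so $A$ needs a repeated eigenvalue.

```python
import numpy as np

# A diagonal with a repeated eigenvalue; Z swaps the first two
# coordinates, so Z = Z^{-1} and Z commutes with A.
A = np.diag([1.0, 1.0, 2.0])
Z = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

Q = np.eye(3)
P = Z  # P differs from Q by the commuting matrix Z

lhs = np.linalg.inv(Q) @ A @ Q
rhs = np.linalg.inv(P) @ A @ P

print(np.allclose(lhs, rhs))  # True: same change-of-basis result
print(np.allclose(P, Q))      # False: but P != Q
```

So with $A \neq I$ and $A$ invertible, equality of the two conjugations does not force $P = Q$.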

EDIT: It seems from the comments that it cannot be guaranteed that $Z$ is the identity. In that case, I'm interested in what can be said about $Z$ at all. It would seem (also from the comments) that, when $A$ is diagonalizable, $Z$ is always a matrix of eigenvectors of $A$, and $P$ and $Q$ can be any pair related by such a $Z$. But is this the only case? And what if $A$ is not diagonalizable? Thanks.

I think your exact question is now a bit unclear given the discussion in the comments. That said, for a fixed $n \times n$ matrix $A$, perhaps your question is to classify all $P$ and $Q$ such that $P A P^{-1} = Q A Q^{-1}$.

Let me at least perform a few basic reductions of your problem: as you observed this is the same as the question $A = P^{-1} Q A (P^{-1} Q)^{-1}$, and setting $Z = P^{-1} Q$ the question is then equivalent to classifying the matrices $Z$ such that $A = Z A Z^{-1}$. (Since if $Z$ is any such matrix and $R$ is any invertible matrix at all then $R A R^{-1} = (R Z) A (R Z)^{-1}$, and conversely if $P A P^{-1} = Q A Q^{-1}$ then necessarily $Q = P Z$ for some such matrix $Z$.)

But for a matrix $Z$ the condition $A = Z A Z^{-1}$ is the same as $A Z = Z A$, and the set of all such $Z$ for fixed $A$ has a name: the centralizer $C_A$ of $A$ in $\operatorname{GL}_n$. Obviously if $A Z = Z A$ then $(P A P^{-1}) (P Z P^{-1}) = (P Z P^{-1}) (P A P^{-1})$, i.e. conjugate matrices have centralizers conjugate by the same matrix.
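A quick way to produce elements of the centralizer $C_A$ is to take polynomials in $A$: any polynomial in $A$ commutes with $A$, so it lies in $C_A$ whenever it is invertible. The matrices below are my own illustrative choice ($A$ a $2 \times 2$ Jordan block, $Z = I + A$):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
Z = np.eye(2) + A  # a polynomial in A; invertible since -1 is not an eigenvalue of A

# Z lies in the centralizer C_A ...
assert np.allclose(A @ Z, Z @ A)
# ... which is equivalent to A = Z A Z^{-1}
assert np.allclose(A, Z @ A @ np.linalg.inv(Z))
```

By the reduction above, any such $Z$ gives a pair $P$ and $Q = PZ$ with $P A P^{-1} = Q A Q^{-1}$ but $P \neq Q$.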

This means that it's enough to pin down the centralizer of a representative of each matrix conjugacy class in order to understand the whole situation. To do this, recall that every matrix $A$ is conjugate to a Jordan canonical form $J$, which is a direct sum of Jordan block matrices. Finally, recall that if matrices $A$ and $Z$ commute then $Z$ must preserve the generalized eigenspaces of $A$. This means that any $Z$ commuting with $A$ must split as a direct sum of matrices on the generalized eigenspaces of $A$; in other words, we may assume that $A$ has only one eigenvalue $\lambda$ and is in Jordan canonical form.
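The splitting along eigenspaces can be seen numerically by computing the dimension of the solution space of $AZ = ZA$. Rewriting the commutation condition via Kronecker products (with row-major vectorization, $\operatorname{vec}(AZ - ZA) = (A \otimes I - I \otimes A^\top)\operatorname{vec}(Z)$), the centralizer dimension is $n^2$ minus the rank of that matrix. The matrix $A$ below is my own illustrative choice, with eigenvalue $1$ of multiplicity $2$ and eigenvalue $2$ of multiplicity $1$:

```python
import numpy as np

A = np.diag([1.0, 1.0, 2.0])
n = A.shape[0]

# Row-major vectorization: vec(AZ - ZA) = (A kron I - I kron A^T) vec(Z)
M = np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)

# dim {Z : AZ = ZA} = n^2 - rank(M)
dim_centralizer = n * n - np.linalg.matrix_rank(M)
print(dim_centralizer)  # 5: a full 2x2 block for eigenvalue 1, plus a scalar for eigenvalue 2
```

The dimension $5 = 2^2 + 1^2$ matches the claim that a commuting $Z$ splits as a direct sum over the eigenspaces: an arbitrary $2 \times 2$ block on the eigenvalue-$1$ eigenspace and a $1 \times 1$ block on the eigenvalue-$2$ eigenspace.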

The possible $Z$ which commute with such an $A$ are then constrained by the requirement that they must send any $k$-generalized eigenvector $v$ for $A$ (i.e. such that $(A - \lambda I)^k v = 0$) to a $k'$-generalized eigenvector with $k' \leq k$.
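For a single Jordan block this constraint is easy to check by hand. It is a standard fact that the matrices commuting with a single $n \times n$ Jordan block are exactly the polynomials in its nilpotent part; the $3 \times 3$ block and coefficients below are my own illustrative choice:

```python
import numpy as np

lam = 2.0
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])
N = J - lam * np.eye(3)  # nilpotent part, N^3 = 0

# Upper-triangular Toeplitz matrix a*I + b*N + c*N^2 commutes with J
a, b, c = 5.0, -1.0, 3.0
Z = a * np.eye(3) + b * N + c * (N @ N)
assert np.allclose(J @ Z, Z @ J)

# Z sends a k-generalized eigenvector to a k'-generalized one with k' <= k:
# e2 satisfies N^2 e2 = 0 (k = 2), and Z e2 still satisfies N^2 (Z e2) = 0.
e2 = np.array([0.0, 1.0, 0.0])
assert np.allclose(np.linalg.matrix_power(N, 2) @ e2, 0)
assert np.allclose(np.linalg.matrix_power(N, 2) @ (Z @ e2), 0)
```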