Swapping rows or columns of Toeplitz matrix changes sign of one eigenvalue


Given an arbitrary Toeplitz matrix, if I swap two rows, one of the eigenvalues changes sign. For example,

$$X = \begin{bmatrix} A & B & C \\ D & A & B \\ E & D & A \end{bmatrix}$$

and

$$Y = \begin{bmatrix} D & A & B \\ A & B & C \\ E & D & A \end{bmatrix}$$

have the same eigenvalues up to sign. I can see this for small examples, but how would I go about proving this?

In particular, if I flip the matrix upside down or left to right, half of the eigenvalues (rounding down) flip sign, while the singular values stay the same. (The flipped matrix is a Hankel matrix.)

Numerical demonstration with MATLAB code:

d = 5;
X = toeplitz(randn(d,1))
   -0.8655   -0.1765    0.7914   -1.3320   -2.3299
   -0.1765   -0.8655   -0.1765    0.7914   -1.3320
    0.7914   -0.1765   -0.8655   -0.1765    0.7914
   -1.3320    0.7914   -0.1765   -0.8655   -0.1765
   -2.3299   -1.3320    0.7914   -0.1765   -0.8655

J = flipud(eye(d))
   0     0     0     0     1
   0     0     0     1     0
   0     0     1     0     0
   0     1     0     0     0
   1     0     0     0     0

svd(X)'
    4.0897    2.0381    1.8456    0.8649    0.8198

svd(X*J)'
    4.0897    2.0381    1.8456    0.8649    0.8198

eig(X)'
   -4.0897   -2.0381   -0.8649    0.8198    1.8456

eig(X*J)'
   -4.0897   -1.8456   -0.8649    0.8198    2.0381

eig(J*X)'
   -4.0897   -1.8456   -0.8649    0.8198    2.0381

EDIT: non-symmetric Toeplitz example


BEST ANSWER

What you claim is not necessarily true if you swap two rows arbitrarily; it is easy to generate a random counterexample by computer. However, if $J$ is the backward identity matrix (as in your MATLAB code), then the eigenvalues of $JA$ (or of $AJ$, which is similar to $JA$) are the same as those of $A$, except that $\lfloor \frac n2\rfloor$ of them have their signs flipped.
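A quick numerical check of both halves of this claim (written in Python/NumPy rather than MATLAB, assuming NumPy is available; the matrix construction mirrors `toeplitz(randn(d,1))` from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random symmetric Toeplitz matrix, as in the question's MATLAB demo.
c = rng.standard_normal(n)
A = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

J = np.fliplr(np.eye(n))  # backward identity (exchange matrix)

# Swapping two arbitrary rows (here rows 0 and 1) generally does NOT
# preserve the eigenvalues up to sign:
P = np.eye(n)
P[[0, 1]] = P[[1, 0]]
swapped  = np.sort(np.abs(np.linalg.eigvals(P @ A)))
original = np.sort(np.abs(np.linalg.eigvals(A)))
print(np.allclose(swapped, original))   # typically False

# But the full row reversal J*A does preserve them up to sign,
# so the absolute values of the spectra agree:
reversed_ = np.sort(np.abs(np.linalg.eigvals(J @ A)))
print(np.allclose(reversed_, original))
```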

The essential reason behind this phenomenon is that a symmetric Toeplitz matrix has an eigenbasis in which $\lceil \frac n2\rceil$ eigenvectors $v$ are "symmetric" (i.e. $Jv=v$) and the remaining $\lfloor \frac n2\rfloor$ are "skew-symmetric" ($Jv=-v$).

So, if $v$ is a symmetric eigenvector corresponding to an eigenvalue $\lambda$, we have $JAv=J(\lambda v)=\lambda Jv=\lambda v$, i.e. $\lambda$ is also an eigenvalue of $JA$.

In contrast, if $v$ is a skew-symmetric eigenvector corresponding to some eigenvalue $\lambda$, then $JAv=J(\lambda v)=\lambda Jv=-\lambda v$, i.e. the sign of the eigenvalue is flipped in $JA$.
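The eigenvector structure described above can be observed directly. The sketch below (Python/NumPy, assuming NumPy is available and that the random matrix has distinct eigenvalues, which holds with probability 1) classifies each eigenvector of a random symmetric Toeplitz matrix as symmetric or skew-symmetric under $J$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
c = rng.standard_normal(n)
A = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])  # symmetric Toeplitz
J = np.fliplr(np.eye(n))

# With distinct eigenvalues, each eigenvector of a symmetric Toeplitz
# matrix satisfies J v = +v (symmetric) or J v = -v (skew-symmetric),
# regardless of the sign convention the eigensolver picks.
w, V = np.linalg.eigh(A)
signs = []
for k in range(n):
    v = V[:, k]
    if np.allclose(J @ v, v):
        signs.append(+1)
    elif np.allclose(J @ v, -v):
        signs.append(-1)
print(signs)  # ceil(n/2) = 3 entries are +1, floor(n/2) = 2 are -1 (in some order)
```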

SECOND ANSWER

It turns out that your observation about the eigenvalues of a real symmetric Toeplitz matrix being the same up to sign after a full row or column reversal can be generalized quite nicely. To start, note that a full row reversal of a matrix $A$ is obtained by pre-multiplying by the exchange matrix $J$ (also called the backward identity), while a full column reversal is obtained by post-multiplying by $J$.

Now observe that a real symmetric Toeplitz matrix $A$ satisfies the commutation relation $JA = AJ$. That is, the matrix $A$ belongs to the class of centrosymmetric matrices. So let $A$ be any real diagonalizable centrosymmetric matrix. Then $J$ and $A$ can be simultaneously diagonalized to block matrices of the form

$$J' = \begin{bmatrix} I & 0 \\ 0 & -I \end{bmatrix} \quad\text{and}\quad A' = \begin{bmatrix} A'_{11} & 0 \\ 0 & A'_{22} \end{bmatrix}.$$ Since $$J'A' = \begin{bmatrix} A'_{11} & 0 \\ 0 & -A'_{22} \end{bmatrix},$$ the result follows: the matrices $A$ and $JA$ have the same eigenvalues up to sign. That's the first (trivial) generalization, from real symmetric Toeplitz matrices to real diagonalizable centrosymmetric matrices.
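This generalization is easy to verify numerically. The sketch below (Python/NumPy, assuming NumPy is available) builds a random non-symmetric centrosymmetric matrix via the construction $M + JMJ$ and checks that every eigenvalue of $JA$ is, up to sign, an eigenvalue of $A$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
J = np.fliplr(np.eye(n))

# J M J reverses both rows and columns, so M + J M J satisfies
# J A J = A, i.e. J A = A J (centrosymmetric). A random such A is
# diagonalizable with probability 1.
M = rng.standard_normal((n, n))
A = M + J @ M @ J
print(np.allclose(J @ A, A @ J))  # True: A is centrosymmetric

# Every eigenvalue mu of J A matches some eigenvalue lam of A up to sign.
eigA = np.linalg.eigvals(A)
for mu in np.linalg.eigvals(J @ A):
    dist = min(min(abs(mu - lam), abs(mu + lam)) for lam in eigA)
    print(dist < 1e-8)  # True for every eigenvalue
```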

Then, notice that we can play the same game with any diagonalizable complex matrix $A$ satisfying $KA=AK$ for a complex involutory matrix $K$ (involutory meaning $K^2=I$), since the minimal polynomial of $K$ has roots $\pm 1$ (this includes the cases $K=\pm I$). That is, given the commutation relation $KA=AK$ with $K^2=I$, pre- (or post-)multiplication of $A$ by $K$ yields a matrix $KA$ (or $AK$) whose eigenvalues are the same as those of $A$ up to sign. That's a second (also trivial) generalization, to complex diagonalizable matrices $A$, with $J$ replaced by an involutory matrix $K$.

To illustrate, here's a simple example of a commuting pair $A$ and $K$ where $K$ is involutory: $$A = \begin{bmatrix} 2 & -1 & -2 \\ -1 & 2 & -2 \\ -2 & -2 & -1 \end{bmatrix}, \qquad K = \begin{bmatrix} \tfrac{2}{3} & -\tfrac{1}{3} & -\tfrac{2}{3} \\ -\tfrac{1}{3} & \tfrac{2}{3} & -\tfrac{2}{3} \\ -\tfrac{2}{3} & -\tfrac{2}{3} & -\tfrac{1}{3} \end{bmatrix}.$$ You can confirm that $A$, $KA$, and $AK$ all have the same multiset of eigenvalues up to sign.
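The confirmation is a one-liner in Python/NumPy (assuming NumPy is available; note that in this particular example $K$ happens to equal $A/3$, since $A^2 = 9I$):

```python
import numpy as np

A = np.array([[ 2., -1., -2.],
              [-1.,  2., -2.],
              [-2., -2., -1.]])
K = A / 3.0  # the involutory matrix from the example

print(np.allclose(K @ K, np.eye(3)))  # True: K is involutory
print(np.allclose(K @ A, A @ K))      # True: K commutes with A

# Spectra agree up to sign: A has eigenvalues {3, 3, -3},
# while K A = A^2 / 3 = 3 I has eigenvalues {3, 3, 3}.
print(np.sort(np.linalg.eigvals(A).real))
print(np.sort(np.linalg.eigvals(K @ A).real))
```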

Nothing surprising so far. What might be unexpected, however, is that the reverse implication also holds when $A$ and $K$ are Hermitian. That is, suppose we have a Hermitian matrix $A$ and a Hermitian involutory matrix $K$ such that $A$ and $KA$ have the same eigenvalues up to sign. Then it turns out that $A$ and $K$ must multiplicatively commute! This is an elementary but somewhat disguised consequence of the fact that the singular values of a matrix equal its eigenvalues up to sign if and only if the matrix is normal. If you're interested in seeing how this works out, see the paper "A spectral characterization of hermitian centrosymmetric and hermitian skew-centrosymmetric K-matrices", SIAM J. Matrix Anal. Appl., 25(3), 601-605. The core of the proof is carried out in the setting of self-adjoint compact linear operators on a complex Hilbert space, and the result for Hermitian matrices is an immediate corollary. In the real case where $K=J$, one could say this gives a sort of "spectral characterization" of the class of bisymmetric matrices (i.e., real symmetric centrosymmetric matrices).

I'll mention briefly that there is an analogous set of results when $A$ and $K$ anti-commute. So the story can be taken a bit further, and in other directions, but I'll stop here since this is perhaps already beyond the scope of the original question.