Finding an ordered basis such that $[T]_\beta$ is a diagonal matrix


Let $T$ be the linear operator on $M_{n \times n}(\Bbb{R})$ defined by $T(A) = A^t$.

1). Find an ordered basis $\beta$ for $M_{2 \times 2}(\Bbb{R})$ such that $[T]_\beta$ is a diagonal matrix.

2). Find an ordered basis $\beta$ for $M_{n \times n}(\Bbb{R})$ such that $[T]_\beta$ is a diagonal matrix, for $n > 2$.

The theorem I can think of is:

Theorem: A linear operator $T$ on a finite-dimensional vector space $V$ is diagonalizable if and only if there exists an ordered basis $\beta$ for $V$ consisting of eigenvectors of $T$. Furthermore, if $T$ is diagonalizable, $\beta = \{v_1, v_2, \dots, v_n\}$ is an ordered basis of eigenvectors of $T$, and $D = [T]_\beta$, then $D$ is a diagonal matrix and $D_{jj}$ is the eigenvalue corresponding to $v_j$ for $1 \le j \le n$.

Question: I am still not sure how to tackle these problems; any thoughts?

Best answer:

First, it is clear that if $A = A^T$ then $T(A) = A$, so any nonzero symmetric matrix is an eigenvector corresponding to $\lambda = 1$.

A basis for the symmetric matrices is given by the $E_{kk}$ together with $E_{ij}+E_{ji}$ for $i < j$, hence the eigenspace corresponding to $\lambda=1$ has dimension ${1 \over 2} n (n+1)$.

This suggests that the skew symmetric matrices are worth looking at to complete the set of eigenvectors.

If $A=-A^T$ then $T(A) = -A$, so any nonzero skew symmetric matrix is an eigenvector corresponding to $\lambda = -1$. A basis for the skew symmetric matrices is $E_{ij}-E_{ji}$ with $i < j$. There are ${1 \over 2} n (n-1)$ of them, and since ${1 \over 2} n (n+1) + {1 \over 2} n (n-1) = n^2$, combining the two families gives a basis for the whole space of matrices.
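As a numerical sanity check (my addition, not part of the answer; numpy and the choice $n = 3$ are assumptions for illustration), we can build both families of basis matrices, confirm the eigenvector relations, and confirm the dimension count:

```python
import numpy as np

n = 3  # any n >= 2 works here

# Basis of symmetric matrices: E_kk, and E_ij + E_ji for i < j
sym = []
for k in range(n):
    E = np.zeros((n, n)); E[k, k] = 1
    sym.append(E)
for i in range(n):
    for j in range(i + 1, n):
        E = np.zeros((n, n)); E[i, j] = 1; E[j, i] = 1
        sym.append(E)

# Basis of skew symmetric matrices: E_ij - E_ji for i < j
skew = []
for i in range(n):
    for j in range(i + 1, n):
        E = np.zeros((n, n)); E[i, j] = 1; E[j, i] = -1
        skew.append(E)

# T(A) = A^t fixes every symmetric matrix and negates every skew one
assert all(np.array_equal(A.T, A) for A in sym)
assert all(np.array_equal(A.T, -A) for A in skew)

# Dimension count: n(n+1)/2 + n(n-1)/2 = n^2
assert len(sym) == n * (n + 1) // 2
assert len(skew) == n * (n - 1) // 2
assert len(sym) + len(skew) == n * n
```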

Aside: We can compute the characteristic polynomial as $\det (\lambda I -T)$, but this is messy. If we can find a monic polynomial $p$ of lowest degree such that $p(T) = 0$ (this is called the minimal polynomial, and you can show that it must divide the characteristic polynomial), then all of the eigenvalues of $T$ are roots of $p$ and vice versa (not counting multiplicities).

In this case we can check that with $p(x) = x^2-1$ we have $p(T) = 0$, and since $T$ is not a scalar multiple of the identity, no degree-one polynomial annihilates $T$, so $p$ is the minimal polynomial. Since its roots are $\pm 1$, the eigenvalues of $T$ are exactly $\pm 1$.
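The relation $T^2 = I$ can also be checked concretely (again my own sketch, not part of the answer): in the standard basis $E_{11}, E_{12}, \dots, E_{nn}$, the operator $T$ is represented by the $n^2 \times n^2$ permutation matrix that swaps the $(i,j)$ and $(j,i)$ coordinates (the commutation matrix), and we can verify $T^2 = I$ and the eigenvalue multiplicities directly:

```python
import numpy as np

n = 3
# Matrix K of T(A) = A^t in the standard basis, using row-major
# flattening: vec(A)[i*n + j] = A[i, j], and T sends E_ij to E_ji.
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        K[j * n + i, i * n + j] = 1

# p(T) = T^2 - I = 0, so the minimal polynomial divides x^2 - 1
assert np.array_equal(K @ K, np.eye(n * n))

# K is symmetric, so eigvalsh applies; eigenvalues are -1 with
# multiplicity n(n-1)/2 and +1 with multiplicity n(n+1)/2
vals = np.sort(np.linalg.eigvalsh(K))
assert np.allclose(vals[: n * (n - 1) // 2], -1)
assert np.allclose(vals[n * (n - 1) // 2 :], 1)
```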

The above shows that the characteristic polynomial is $\chi_T(x) = (x-1)^{{1 \over 2} n (n+1)} (x+1)^{{1 \over 2} n (n-1)}$. I am glad that I did not work it out explicitly :-).

Second answer:

Start with the first part. As you said, the eigenvalues are $\lambda = \pm 1$. I'm not 100% clear on how you've gone about this, but even if this was just a guess, we will vindicate this in the end.

Let's first look for the eigenvectors corresponding to $\lambda = -1$. We wish to solve the equation $$T(A) = A^\top = (-1)A,$$ where $A$ is a $2 \times 2$ real matrix. Matrices $A$ in $M_{2 \times 2}(\Bbb{R})$ take the form $$A = \begin{pmatrix} a & b \\ c & d\end{pmatrix}.$$ Then, we are solving $$\begin{pmatrix} a & c \\ b & d\end{pmatrix} = \begin{pmatrix} -a & -b \\ -c & -d\end{pmatrix}.$$ By equating entries, \begin{align*} a &= -a \\ c &= -b \\ b &= -c \\ d &= -d. \end{align*} This is now a system of linear equations. The first and fourth equations imply that $a = 0$ and $d= 0$, while the second and third equations contend the same thing: $c = -b$. Thus, our eigenvector must take the following form: $$A = \begin{pmatrix} 0 & -b \\ b & 0\end{pmatrix},$$ where $b \in \Bbb{R}$. Please verify that $T(A) = -A$, as required, so $A$ is definitely an eigenvector for eigenvalue $-1$, so long as $A \neq 0$. That is, so long as $b \neq 0$. As such, our eigenspace is spanned by a single vector: $$\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}.$$
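As a quick numerical double-check (my addition, not part of the answer), the candidate found above does satisfy $T(A) = -A$, for any nonzero scalar multiple:

```python
import numpy as np

# Spanning eigenvector for lambda = -1 found above
A = np.array([[0, -1],
              [1,  0]])
assert np.array_equal(A.T, -A)  # T(A) = A^t = -A

# Any nonzero multiple b * A is also an eigenvector for -1
b = 2.5  # arbitrary nonzero real, chosen for illustration
assert np.array_equal((b * A).T, -(b * A))
```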


We can do the same thing for the other eigenvalue. We're now solving $$\begin{pmatrix} a & c \\ b & d\end{pmatrix} = \begin{pmatrix} a & b \\ c & d\end{pmatrix}.$$ By equating entries, \begin{align*} a &= a \\ c &= b \\ b &= c \\ d &= d. \end{align*} Now, the first and fourth equations are tautological, and can be ignored. The second and third equations tell us the same thing again: $c = b$. Thus, our eigenvectors take the form, \begin{align*} A &= \begin{pmatrix} a & b \\ b & d\end{pmatrix} \\ &= a \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + d \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \end{align*} Thus, $A$ must be a linear combination of the above three matrices. Verify that they are eigenvectors, that they're linearly independent, and hence conclude that they form a basis for the eigenspace.

So, we can form an eigenbasis $$\left\{\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right\}.$$ Note that if we were missing any eigenvalues, we wouldn't have four linearly independent eigenvectors, so indeed, $-1$ and $+1$ are the only two eigenvalues.
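To tie this back to the theorem quoted in the question, we can compute $[T]_\beta$ for this ordered basis and see that it is diagonal with the eigenvalues on the diagonal. This numpy sketch (my addition; it represents each $2 \times 2$ matrix as a vector in $\Bbb{R}^4$ via row-major flattening) does exactly that:

```python
import numpy as np

# The ordered eigenbasis beta from above
beta = [
    np.array([[0, -1], [1, 0]]),  # eigenvalue -1
    np.array([[1, 0], [0, 0]]),   # eigenvalue +1
    np.array([[0, 1], [1, 0]]),   # eigenvalue +1
    np.array([[0, 0], [0, 1]]),   # eigenvalue +1
]

# Columns of B are the coordinate vectors of the basis matrices
B = np.column_stack([A.flatten() for A in beta])
assert np.linalg.matrix_rank(B) == 4  # beta is linearly independent

# [T]_beta: express each T(A) = A^t in coordinates relative to beta
T_beta = np.linalg.inv(B) @ np.column_stack([A.T.flatten() for A in beta])
assert np.allclose(T_beta, np.diag([-1, 1, 1, 1]))
```

The diagonal entries $-1, 1, 1, 1$ match the eigenvalues of the basis vectors in order, exactly as the theorem predicts.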

That's it for part 1). For part 2), I would suggest thinking about how this generalises. Our eigenbasis from the previous part consisted of matrices with a single $1$ in the diagonal (and $0$s elsewhere), as well as a matrix with two symmetric off-diagonal $1$s, and another matrix where these $1$s had different signs. Think about how you'd generalise this to more dimensions.
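If you want to test your generalisation before proving it, the suggested construction can be checked numerically for any $n$ (my own sketch, assuming the basis of diagonal units, symmetric pairs, and skew pairs described above):

```python
import numpy as np

n = 4  # any n > 2; chosen here for illustration
beta = []
for k in range(n):
    E = np.zeros((n, n)); E[k, k] = 1
    beta.append(E)                       # single diagonal 1: eigenvalue +1
for i in range(n):
    for j in range(i + 1, n):
        S = np.zeros((n, n)); S[i, j] = S[j, i] = 1
        beta.append(S)                   # symmetric pair of 1s: eigenvalue +1
for i in range(n):
    for j in range(i + 1, n):
        W = np.zeros((n, n)); W[i, j] = 1; W[j, i] = -1
        beta.append(W)                   # opposite-sign pair: eigenvalue -1

# [T]_beta via coordinates, as in the 2x2 case
B = np.column_stack([A.flatten() for A in beta])
T_beta = np.linalg.inv(B) @ np.column_stack([A.T.flatten() for A in beta])
expected = np.diag([1] * (n * (n + 1) // 2) + [-1] * (n * (n - 1) // 2))
assert np.allclose(T_beta, expected)
```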