Linear algebra: diagonalisation of antisymmetrisation


I'm facing an apparent contradiction when trying to solve a linear algebra exercise. I am asked to find a basis for the vector space of $2\times 2$ matrices such that the function $$f(A) = A - A^t $$

is represented by the matrix

$$\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right) $$

which would mean that some matrices are eigenvectors of $f$ with eigenvalue $1$, which seems to be impossible.


On BEST ANSWER

You should redefine $f$ as

$$f(A)=\frac{1}{2}(A-A^t).$$

This is the map that is usually called anti-symmetrization, and solving the exercise should tell you why!
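This rescaled map is idempotent, which is why its only possible eigenvalues are $0$ and $1$. A minimal numerical sanity check (the matrix representation and helper functions below are ours, purely for illustration):

```python
from fractions import Fraction

# 2x2 matrices as tuples of row tuples; helper names are illustrative only.
def transpose(A):
    return ((A[0][0], A[1][0]), (A[0][1], A[1][1]))

def sub(A, B):
    return tuple(tuple(a - b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def scale(c, A):
    return tuple(tuple(c * x for x in row) for row in A)

def g(A):
    """Anti-symmetrization with the 1/2 factor: g(A) = (A - A^t)/2."""
    return scale(Fraction(1, 2), sub(A, transpose(A)))

A = ((Fraction(1), Fraction(2)), (Fraction(3), Fraction(4)))
assert g(g(A)) == g(A)  # g is idempotent (a projection), so its eigenvalues lie in {0, 1}
```

Since $g\circ g=g$, the minimal polynomial of $g$ divides $x(x-1)$, so $g$ is diagonalizable with eigenvalues among $\{0,1\}$.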


Notice that

$$f^2(A)=f(A)-f(A)^t=(A-A^t)-(A^t-A)=2(A-A^t)=2f(A),$$ hence

$$f^2=2f,$$ so the polynomial $x(x-2)$, which has simple roots, annihilates $f$, and therefore $f$ is diagonalizable.
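The identity $f^2=2f$ is easy to check on a sample matrix; a minimal sketch (the representation and helper names are ours, not from the post):

```python
# 2x2 matrices as tuples of row tuples; helper names are illustrative only.
def transpose(A):
    return ((A[0][0], A[1][0]), (A[0][1], A[1][1]))

def sub(A, B):
    return tuple(tuple(a - b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def scale(c, A):
    return tuple(tuple(c * x for x in row) for row in A)

def f(A):
    """f(A) = A - A^t, as in the question."""
    return sub(A, transpose(A))

A = ((1, 2), (3, 4))
assert f(f(A)) == scale(2, f(A))  # f^2 = 2f on this sample matrix
```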

Moreover, $A$ is an eigenvector associated to $0$ iff $A=A^t$, i.e. $A\in S_2(\Bbb R)$, which has dimension $\frac{2\times 3}{2}=3$, and $A$ is an eigenvector associated to $2$ iff $A=-A^t$, i.e. $A\in AS_2(\Bbb R)$, which has dimension $1$. Hence the matrix of $f$ relative to a basis adapted to the decomposition

$$\mathcal M_2(\Bbb R)=S_2(\Bbb R)\oplus AS_2(\Bbb R)$$ is $$\operatorname{diag}(0,0,0,2).$$
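Concretely, one such adapted basis (our choice, with $E_{ij}$ the standard matrix units) is $E_{11},E_{22},E_{12}+E_{21}$ for $S_2(\Bbb R)$ and $E_{12}-E_{21}$ for $AS_2(\Bbb R)$. A quick check that $f$ kills the symmetric part and doubles the antisymmetric one:

```python
# 2x2 matrices as tuples of row tuples; helper names are illustrative only.
def transpose(A):
    return ((A[0][0], A[1][0]), (A[0][1], A[1][1]))

def sub(A, B):
    return tuple(tuple(a - b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def scale(c, A):
    return tuple(tuple(c * x for x in row) for row in A)

def f(A):
    return sub(A, transpose(A))

ZERO = ((0, 0), (0, 0))
# Symmetric basis vectors E11, E22, E12 + E21: all sent to 0 by f.
S = [((1, 0), (0, 0)), ((0, 0), (0, 1)), ((0, 1), (1, 0))]
# Antisymmetric basis vector E12 - E21: eigenvector of f with eigenvalue 2.
K = ((0, 1), (-1, 0))

assert all(f(B) == ZERO for B in S)
assert f(K) == scale(2, K)  # so the matrix of f in this basis is diag(0, 0, 0, 2)
```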