Is there a better way of solving this problem other than guessing what the $2 \times 2$ matrices are?


EDIT: I am just wondering whether, in this type of question, we would be allowed to assume that the ground field is algebraically closed (e.g., the complex numbers), so that we can look at the Jordan forms of the matrices. Please feel free to comment on this if you'd like.

The Jordan form turned out to be the key to the problem, for anyone interested in solving it.

This is an old linear algebra exam question that I am working on.

The problem statement is:

Part 1

Can you find $2\times 2$ matrices $A$, $B$, and $C$ so that $A^2\neq 0$, $B^2\neq 0$, and $C^2\neq 0$,

but $AB=0$, $BC=0$, and $CA=0$?

Part 2

Can you find $2\times 2$ matrices $A$, $B$, and $C$ so that $A^2\neq 0$, $B^2\neq 0$, and $C^2\neq 0$,

but $AB=0$, $AC=0$, and $BC=0$?

So far, I have spent a lot of time on this problem by naively playing around with simple $2\times 2$ matrices, but no combination of three has worked yet.

Every time I think I've found a combination that works, it turns out that the square of one matrix is unfortunately the zero matrix, which disqualifies it from being used.

What is the real intent of this problem? What concepts are being tested? What is a good strategy?

Any ideas are welcome.

Thanks,



On BEST ANSWER

You can use the Jordan decomposition for, say, $A$. Then we may assume that $A$ is a diagonal matrix, or $$ A=\begin{pmatrix} a & 1 \cr 0 & a \end{pmatrix}, $$ with $a\neq 0$, because $A^2\neq 0$. Now it becomes an easy computation with $2\times 2$ matrices to find an example: $$ A=\begin{pmatrix} 1 & 0 \cr 0 & 0 \end{pmatrix},\; B=\begin{pmatrix} 0 & 0 \cr 1 & 1 \end{pmatrix},\; C=\begin{pmatrix} 0 & -1 \cr 0 & 1 \end{pmatrix}. $$ Of course, $A^2,B^2,C^2\neq 0$, but $AB=BC=CA=0$. There are many solutions, even with $A$ diagonal.
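This example is easy to verify numerically; here is a quick NumPy check (not part of the original answer, just a sanity test of the matrices above):

```python
import numpy as np

# The matrices proposed in the answer above.
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 0], [1, 1]])
C = np.array([[0, -1], [0, 1]])

# All three squares are nonzero...
for M in (A, B, C):
    assert np.any(M @ M != 0)

# ...while the cyclic products vanish.
assert np.all(A @ B == 0)
assert np.all(B @ C == 0)
assert np.all(C @ A == 0)
```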

The assumption $A^2\neq 0$ means that $A$ is not nilpotent, i.e., its characteristic polynomial is not $t^2$. By the Cayley-Hamilton theorem, $A^2-\operatorname{tr}(A)A+\det(A)I=0$, so $\det(A)$ and $\operatorname{tr}(A)$ cannot both be zero.
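The Cayley-Hamilton identity is easy to check directly for any concrete $2\times 2$ matrix; a minimal NumPy sketch (the sample matrices are arbitrary choices of mine):

```python
import numpy as np

# Cayley-Hamilton for 2x2: A^2 - tr(A)*A + det(A)*I = 0.
A = np.array([[3.0, 1.0], [2.0, -1.0]])  # arbitrary sample matrix
lhs = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
assert np.allclose(lhs, 0)

# For a singular matrix (det = 0) the identity reduces to S^2 = tr(S)*S,
# so S^2 = 0 would force tr(S) = 0 as well.
S = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1, det = 0
assert np.allclose(S @ S, np.trace(S) * S)
```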

Edit: For the second part: if $A=\begin{pmatrix} a & 1 \cr 0 & a \end{pmatrix}$, then $a\neq 0$, so $A$ is invertible and $AB=0$ immediately gives $B=0$, a contradiction. Now try the other case, where $A$ is diagonal: either $A$ is invertible (same contradiction), or $A$ has exactly one nonzero diagonal entry, and then $AB=AC=0$ force the first rows of $B$ and $C$ to vanish (after conjugating so the nonzero entry is in the top-left corner); since $B^2\neq 0$, the equation $BC=0$ then forces $C=0$, again a contradiction.
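The diagonal case can be traced concretely. A minimal NumPy sketch with $A=\operatorname{diag}(1,0)$; the nonzero entries of $B$ and $C$ below are arbitrary sample values of mine standing in for generic $p,q,r,s$:

```python
import numpy as np

A = np.array([[1, 0], [0, 0]])
# AB = 0 and AC = 0 force the first rows of B and C to be zero:
B = np.array([[0, 0], [2, 3]])   # generic [[0,0],[p,q]]; q != 0 so B^2 != 0
C = np.array([[0, 0], [5, 7]])   # generic [[0,0],[r,s]]
assert np.all(A @ B == 0) and np.all(A @ C == 0)

# BC = [[0,0],[q*r, q*s]]; with q != 0, BC = 0 would force r = s = 0,
# i.e. C = 0 -- contradicting C^2 != 0.
assert np.all(B @ C == np.array([[0, 0], [3 * 5, 3 * 7]]))
```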


Can I assume that the field is algebraically closed?

Yes. A zero (resp. nonzero) matrix over the base field remains a zero (resp. nonzero) matrix over any extension field. So, in both parts 1 and 2, you may assume without loss of generality that the underlying field is algebraically closed.

Can I use Jordan form, without assuming that the field is algebraically closed?

Yes. Since $A$ is $2\times2$ with $A^2,B^2\ne0$ and $AB=0$, the matrix $A$ must be nonzero but singular. It follows from the Cayley-Hamilton theorem that $A^2=\operatorname{tr}(A)A$, and $\operatorname{tr}(A)\ne0$ because $A^2\ne0$. Therefore $x(x-\operatorname{tr}(A))$ --- a product of two distinct linear factors --- is an annihilating polynomial of $A$. Hence $A$ is diagonalisable over the given base field. So you don't need any field extension to diagonalise $A$.

Must I use Jordan form (or diagonalisation) to solve the problems?

No. Since $A$ is $2\times2$ with $A^2,B^2\ne0$ and $AB=0$, the matrix $A$ must have rank $1$. Similarly for $B$ and $C$. Let $A=x_1y_1^T,\ B=x_2y_2^T$ and $C=x_3y_3^T$. Then part 1 is equivalent to solving \begin{cases} y_1^Tx_1,\ y_2^Tx_2,\ y_3^Tx_3\ne0,\\ y_1^T x_2 = y_2^T x_3 = y_3^T x_1 = 0 \end{cases} for some nonzero vectors $x_1,x_2,x_3,y_1,y_2,y_3$. By a change of basis, you can further assume that $x_1=(1,0)^T$. Now the problem can be solved easily by simple inspection. In fact, under the assumption that $x_1=(1,0)^T$, it can be shown that $A,B,C$ must be, up to nonzero scalar multiples, in the form of \begin{align*} A&=\pmatrix{1\\ 0}\pmatrix{1&a}=\pmatrix{1&a\\ 0&0},\\ B&=\pmatrix{-a\\ 1}\pmatrix{1&-b}=\pmatrix{-a&ab\\ 1&-b},\\ C&=\pmatrix{b\\ 1}\pmatrix{0&1}=\pmatrix{0&b\\ 0&1} \end{align*} for some $a,b$ such that $b\ne-a$.
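The parametrized family above is easy to verify for concrete values of $a$ and $b$; a minimal NumPy sketch (the helper `triple` and the sample values are mine, not part of the answer):

```python
import numpy as np

def triple(a, b):
    """Build A, B, C from the rank-1 parametrization above (need b != -a)."""
    A = np.outer([1, 0], [1, a])     # [[1, a], [0, 0]]
    B = np.outer([-a, 1], [1, -b])   # [[-a, a*b], [1, -b]]
    C = np.outer([b, 1], [0, 1])     # [[0, b], [0, 1]]
    return A, B, C

A, B, C = triple(a=2, b=3)           # any a, b with b != -a works
for M in (A, B, C):
    assert np.any(M @ M != 0)        # squares nonzero
assert np.all(A @ B == 0)
assert np.all(B @ C == 0)
assert np.all(C @ A == 0)
```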

Using the same technique, we see that part 2 is equivalent to solving \begin{cases} y_1^Tx_1,\ y_2^Tx_2,\ y_3^Tx_3\ne0,\\ y_1^T x_2 = y_2^T x_3 = y_{\color{red}{1}}^T x_{\color{red}{3}} = 0 \end{cases} for some nonzero vectors $x_1,x_2,x_3,y_1,y_2,y_3$. The equations $y_1^T x_2 = y_1^T x_3 = 0$ imply that $x_2$ and $x_3$ are linearly dependent, as both lie in the one-dimensional subspace orthogonal to $y_1$. In turn, the equations $y_1^T x_2 = y_2^T x_3 = 0$ imply that $y_1$ and $y_2$ are linearly dependent. But then the requirements $y_2^Tx_2\ne0$ and $y_1^Tx_2=0$ contradict each other. Therefore part 2 has no solution.
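As a sanity check of this impossibility (an illustration, not a proof, since it only covers finitely many matrices), an exhaustive search over $2\times2$ matrices with entries in $\{-1,0,1\}$ finds no triple satisfying part 2:

```python
import itertools
import numpy as np

# All 2x2 matrices with entries in {-1, 0, 1}, filtered to M^2 != 0.
mats = [np.array(e).reshape(2, 2)
        for e in itertools.product([-1, 0, 1], repeat=4)]
nonnilpotent = [M for M in mats if np.any(M @ M != 0)]

# Look for A, B, C with A^2, B^2, C^2 != 0 and AB = AC = BC = 0.
found = False
for A in nonnilpotent:
    Bs = [B for B in nonnilpotent if np.all(A @ B == 0)]
    Cs = [C for C in nonnilpotent if np.all(A @ C == 0)]
    if any(np.all(B @ C == 0) for B in Bs for C in Cs):
        found = True
        break
print(found)  # -> False: no counterexample in this finite set
```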