Given the result of diagonalization of a matrix, determine the two invertible matrices.


Determine $3$ by $3$ invertible matrices $P$ and $Q$ such that $$P\left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right)Q = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right). $$ I tried to diagonalize the first matrix, but the diagonal matrix I obtained is $$\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & -\sqrt{2} & 0 \\ 0 & 0 & \sqrt{2} \end{array} \right). $$ The signs don't seem to be right...
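(As a quick sanity check of the diagonalization in the question: the eigenvalues of the matrix on the left are indeed $0$, $-\sqrt{2}$, and $\sqrt{2}$. A short NumPy snippet, added here for illustration, confirms this numerically:)

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# eigvalsh is appropriate for symmetric matrices and returns
# the eigenvalues in ascending order
eigenvalues = np.linalg.eigvalsh(A)
print(eigenvalues)  # approximately [-sqrt(2), 0, sqrt(2)]
```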


Accepted answer:

As it stands, the question does not require $P$ and $Q$ to be inverses of each other, or real orthogonal, etc. So there is no need to diagonalise the matrix on the left by similarity or by congruence. In fact, the two matrices are neither similar nor congruent: the matrix on the LHS has zero trace while the one on the RHS has trace $2$, so they are not similar; and the LHS is indefinite (eigenvalues $0,\pm\sqrt{2}$) while the RHS is positive semidefinite, so by Sylvester's law of inertia they are not congruent. Thus no pair of invertible real matrices $P$ and $Q$ with $P=Q^{-1}$ or $P=Q^T$ solves the equation.

The equation is satisfied if and only if $$ \pmatrix{1&0\\ 0&1\\ 1&0}\pmatrix{0&1&0\\ 1&0&1} =\pmatrix{0&1&0\\ 1&0&1\\ 0&1&0} =P^{-1}\pmatrix{1\\ &1\\ &&0}Q^{-1} =P^{-1}\pmatrix{1&0\\ 0&1\\ 0&0}\pmatrix{1&0&0\\ 0&1&0}Q^{-1}. $$ So it suffices to pick $P$ and $Q$ such that $$ P^{-1}=\pmatrix{1&0&\ast\\ 0&1&\ast\\ 1&0&\ast}, \ Q^{-1}=\pmatrix{0&1&0\\ 1&0&1\\ \ast&\ast&\ast}, $$ where the starred entries are chosen so that both matrices are invertible. For instance, we may set $$ P^{-1}=\pmatrix{1&0&1\\ 0&1&0\\ 1&0&0}, \ Q^{-1}=\pmatrix{0&1&0\\ 1&0&1\\ 1&0&0}, $$ so that $$ P=\pmatrix{0&0&1\\ 0&1&0\\ 1&0&-1}, \ Q=\pmatrix{0&0&1\\ 1&0&0\\ 0&1&-1}. $$

Alternatively, you may continue your own work. Suppose you have found two invertible matrices $P_1$ and $Q_1$ such that $$ P_1\pmatrix{0&1&0\\ 1&0&1\\ 0&1&0}Q_1=\pmatrix{0\\ &-\sqrt{2}\\ &&\sqrt{2}}. $$ Then $$ \pmatrix{0\\ &-\frac{1}{\sqrt{2}}\\ &&\frac{1}{\sqrt{2}}}P_1\pmatrix{0&1&0\\ 1&0&1\\ 0&1&0}Q_1=\pmatrix{0\\ &1\\ &&1}. $$ If we now apply a further permutation on both sides to flip the first and the last diagonal entries on the RHS, we obtain $$ \pmatrix{0&0&1\\ 0&1&0\\ 1&0&0}\pmatrix{0\\ &-\frac{1}{\sqrt{2}}\\ &&\frac{1}{\sqrt{2}}}P_1\pmatrix{0&1&0\\ 1&0&1\\ 0&1&0}Q_1\pmatrix{0&0&1\\ 0&1&0\\ 1&0&0}=\pmatrix{1\\ &1\\ &&0}. $$ Hence we may set $$ P=\pmatrix{0&0&1\\ 0&1&0\\ 1&0&0}\pmatrix{0\\ &-\frac{1}{\sqrt{2}}\\ &&\frac{1}{\sqrt{2}}}P_1, \quad Q=Q_1\pmatrix{0&0&1\\ 0&1&0\\ 1&0&0}. $$
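The explicit $P$ and $Q$ above are easy to verify numerically. A minimal NumPy sketch (the matrices are copied from the answer):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
P = np.array([[0, 0, 1],
              [0, 1, 0],
              [1, 0, -1]])
Q = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, -1]])

# both matrices are invertible (nonzero determinant),
# and P A Q equals diag(1, 1, 0)
print(np.linalg.det(P), np.linalg.det(Q))
print(P @ A @ Q)
```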

Another answer:

Denoting the matrix on the left-hand side by $A$, let $T:\mathbb R^3\to\mathbb R^3$, $T:\mathbf x\mapsto A\mathbf x$. Your exercise is then equivalent to finding bases for the domain and codomain of $T$ such that the matrix of $T$ relative to those bases is $\operatorname{diag}(1,1,0)$. The matrices $P$ and $Q$ that you are being asked to find are the corresponding change-of-basis matrices.

As explained in this answer, the basic process is to find a basis for the kernel of $T$ and extend that to a basis of $\mathbb R^3$. One way to find this extension is to compute a basis for the row space of $A$—the row and null spaces of a matrix are complementary. Collecting these basis vectors into the columns of a matrix gives you $Q$. Recalling that the columns of a transformation matrix are the images of the basis vectors, it should be clear that the codomain basis must include all of the nonzero images of these domain basis vectors (in the correct order, of course). Complete this basis any way you like and collect these vectors into the matrix $P^{-1}$.
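The procedure above can be sketched symbolically. A minimal SymPy illustration — the completion vector $(0,0,1)^T$ for $P^{-1}$ is one arbitrary (assumed) choice that happens to work here; in general any vector keeping the columns independent will do:

```python
import sympy as sp

A = sp.Matrix([[0, 1, 0], [1, 0, 1], [0, 1, 0]])

# Domain basis: a row-space basis (complementary to the null space) first,
# then a kernel basis vector; collect them as the columns of Q.
row_basis = [r.T for r in A.rowspace()]  # column vectors spanning the row space
ker_basis = A.nullspace()                # column vectors spanning the kernel
Q = sp.Matrix.hstack(*row_basis, *ker_basis)

# Codomain basis: the images of the row-space vectors, completed arbitrarily;
# collect them as the columns of P^{-1}.
images = [A * v for v in row_basis]
P_inv = sp.Matrix.hstack(*images, sp.Matrix([0, 0, 1]))  # (0,0,1): assumed completion
assert P_inv.det() != 0  # the chosen completion must keep the columns independent
P = P_inv.inv()

print(P * A * Q)  # diag(1, 1, 0)
```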