Linear algebra; orthogonal diagonalization


I am currently studying for the most important exam of my year, but there is one thing I keep puzzling over. Help would be greatly appreciated, because my books don't explain how to determine which matrix to use, and especially why.

$A$ is a symmetric $3\times 3$ matrix with eigenvalues $-3$ and $6$. The eigenspace for the eigenvalue $-3$ is the plane given by the equation $2x_1+x_2=2x_3$. Which of the following matrices can be used as the matrix $P$ in $A=PDP^T$, and why? I know the answer is $B$ or $D$, since $A$ and $C$ are not orthogonal.

Thanks in advance!


BEST ANSWER

Firstly, the correct answer is the matrix described in case (c): $$P = \begin{bmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{3\sqrt 2} & \frac 2 3\\ 0 & \frac 4{3\sqrt 2} & \frac 13\\ \frac 1{\sqrt 2} & \frac 1{3\sqrt 2} & -\frac{2}{3} \end{bmatrix}.$$

You can easily verify that $P$ is the only orthogonal matrix among the given ones, by checking that $P\cdot P^T = P^T \cdot P = I$.
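As a quick numerical sanity check, the product $P P^T$ can be computed in plain Python (a sketch; the entries are taken from the matrix in case (c) above):

```python
# Verify that P from case (c) is orthogonal, i.e. that P * P^T = I.
import math

s2 = math.sqrt(2)
P = [
    [1 / s2, -1 / (3 * s2),  2 / 3],
    [0,       4 / (3 * s2),  1 / 3],
    [1 / s2,  1 / (3 * s2), -2 / 3],
]

def times_transpose(M):
    """Compute M * M^T for a square matrix M given as nested lists."""
    n = len(M)
    return [[sum(M[i][k] * M[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

PPt = times_transpose(P)
is_identity = all(abs(PPt[i][j] - (1 if i == j else 0)) < 1e-12
                  for i in range(3) for j in range(3))
print(is_identity)  # True
```

The same check applied to the other candidate matrices would fail, which is what rules them out.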


I suppose that we have the eigenspace $V(-3) = \{ (x_1,x_2,x_3) \in \mathbb R^3: 2x_1 + x_2 = 2x_3\}$, which is equivalent to: $$V(-3)\begin{array}[t]{l} = \{(x_1, -2x_1 + 2x_3, x_3) \in \mathbb R^3: x_1, x_3 \in \mathbb R\}\\[2ex] =\{x_1\cdot (1,-2,0) + x_3\cdot (0,2,1):x_1,x_3 \in \mathbb R\}. \end{array}$$

That means $V(-3)= \langle\:(1,-2,0), (0,2,1)\:\rangle$.

Notice that every linear combination of the $2$ above vectors is an eigenvector that corresponds to the eigenvalue $\lambda = -3$. Taking advantage of this fact we have that $2$ columns out of $3$ of $P$ will be of the form: $$a\cdot \begin{bmatrix} 1 \\ -2 \\ 0 \end{bmatrix} + b\cdot \begin{bmatrix} 0 \\ 2 \\ 1\end{bmatrix} =\begin{bmatrix} a \\2\cdot (b-a)\\b\end{bmatrix}\quad a,b \in \mathbb R\tag{$\star$},$$ since the columns of $P$ contain eigenvectors, which correspond to the respective eigenvalues.

Now, it is easy to check which $2$ columns of the given matrices satisfy the $(\star)$ by plugging in different values for $a, b \in \mathbb R$.

For $(a, b) =\left(\frac{1}{\sqrt 2},\frac{1}{\sqrt 2}\right)$ we obtain the first column of matrix $P$, and for $(a,b) =\left(- \frac{1}{3\sqrt 2}, \frac{1}{3\sqrt 2}\right)$ we obtain the second column of matrix $P$.
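Equivalently, instead of hunting for $a$ and $b$, one can simply test whether each column satisfies the eigenspace equation $2x_1+x_2=2x_3$ directly (a sketch in plain Python, using the columns of the matrix from case (c)):

```python
# Test which columns of P lie in the eigenspace 2*x1 + x2 = 2*x3.
import math

s2 = math.sqrt(2)
cols = [
    (1 / s2, 0, 1 / s2),                       # first column of P
    (-1 / (3 * s2), 4 / (3 * s2), 1 / (3 * s2)),  # second column of P
    (2 / 3, 1 / 3, -2 / 3),                    # third column of P
]

for i, (x1, x2, x3) in enumerate(cols, start=1):
    in_eigenspace = abs(2 * x1 + x2 - 2 * x3) < 1e-12
    print(f"column {i}: {in_eigenspace}")
```

The first two columns satisfy the equation (they span $V(-3)$), while the third does not: it is the eigenvector for the eigenvalue $6$.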


You can obtain the matrix $P$ by using eigenvectors as columns (or rows; I'm prone to making this kind of error), so check whether the column vectors are eigenvectors of $A$. Of course, this assumes that $A$ is given, which I now realize probably isn't the case.

$A$ and $B$ aren't orthogonal, since the first row vector does not have norm $1$ and the first and second column vectors aren't orthogonal. $C$ and $D$ aren't orthogonal, since the second and third column vectors aren't orthogonal. So none of them can be used as $P$. Notice, however, that in most of them there are two column vectors that satisfy the equation of the $(-3)$-eigenspace, which you need since that eigenspace is $2$-dimensional. So they got that right, at least.

Edit: So I got this wrong. There is a theorem (the spectral theorem) which says that any symmetric matrix can be diagonalized using an orthogonal matrix, so we can always find one $P$ that is orthogonal, but that does not mean that every valid $P$ must be orthogonal. But then I cannot require that $P$ consist of eigenvectors of $A$, so the solution starts falling apart.