Orthogonal symmetric matrix in different bases


Given the orthogonal symmetric matrix $$ A =\frac{1}{3}\begin{pmatrix} 1&2&2\\ 2&1&-2\\ 2&-2&1 \end{pmatrix} \ , $$ I ran into some confusion when trying to represent it with respect to the (orthogonal) basis $$ B = \left\{ \mathbf b_1 =\begin{pmatrix} 1\\2\\2 \end{pmatrix}, \ \mathbf b_2 = \begin{pmatrix} 2\\1\\-2 \end{pmatrix},\ \mathbf b_3 = \begin{pmatrix} 2\\-2\\1 \end{pmatrix} \right \} \ . $$ The choice of these particular basis vectors may seem peculiar given that they are the columns of $3A$, but the purpose was to see what the linear map looked like under this particular basis. Obviously, an (orthogonal) eigenbasis would be preferable in this case, but that would not have resolved my initial confusion; hence this choice.
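As a quick sanity check (not part of the original question), the stated properties of $A$ and $B$ can be verified with exact rational arithmetic; the helper names below are ad hoc:

```python
from fractions import Fraction

# M holds the basis vectors b_1, b_2, b_3 as its columns; A = (1/3) M.
M = [[1, 2, 2],
     [2, 1, -2],
     [2, -2, 1]]
A = [[Fraction(x, 3) for x in row] for row in M]

def matmul(X, Y):
    """Multiply two 3x3 matrices of Fractions."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(X):
    return [list(row) for row in zip(*X)]

I3 = [[Fraction(1 if i == j else 0) for j in range(3)] for i in range(3)]

assert A == transpose(A)              # A is symmetric
assert matmul(A, transpose(A)) == I3  # A is orthogonal, hence A^2 = I as well

# The b_i are pairwise orthogonal, each of squared length 9 (not unit length!).
b = transpose(M)                      # b[0] = b_1, etc.
for i in range(3):
    for j in range(3):
        assert sum(b[i][k] * b[j][k] for k in range(3)) == (9 if i == j else 0)
```

Note that $B$ is orthogonal but not orthonormal: each $\mathbf b_i$ has length $3$, which is precisely where the stray factors of $9$ below come from.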

The core of my confusion arose from the (at first glance) strikingly surprising similarity between the decomposition of the standard basis vectors with respect to the basis $B$ and vice versa: $$\mathbf e_1 = \frac{1}{9}\mathbf b_1 + \frac{2}{9}\mathbf b_2+\frac{2}{9}\mathbf b_3 \\ \vdots \\ \mathbf b_1 = \mathbf e_1 + 2\mathbf e_2+2 \mathbf e_3 \\ \vdots \\ \ .$$ What added to my confusion was that I mistakenly interpreted some matrices with respect to the standard basis. In other words, I didn't fully understand with respect to which basis I should read each entry of the matrices involved.

Best answer:

To proceed efficiently, we introduce some notation (taken from Michael Stoll's Linear Algebra 1 syllabus) to clarify the confusion at hand.

For the linear map $A:\mathbb R^3 \to \mathbb R^3 $ we denote by $$ [A]^E_E =\frac{1}{3}\begin{pmatrix} 1&2&2\\ 2&1&-2\\ 2&-2&1 \end{pmatrix}\ , $$ its matrix representation with respect to the standard basis $E = \{ \mathbf e_1, \mathbf e_2, \mathbf e_3\}$.

In this particular matrix, the columns are the images of the standard basis vectors under the map $A$, with the $i$-th entry of each column giving the coefficient of $\mathbf e_i$ in that image.

In other words, the first column represents $[A]^E_E \mathbf e_1 = \frac{1}{3}\mathbf e_1 + \frac{2}{3}\mathbf e_2 + \frac{2}{3}\mathbf e_3.$

So, $[A]^E_E$ gives us the image of the standard basis $E$ in terms of the standard basis.

To find the matrix $[A]^B_B$ with respect to the basis $B$ we write it as a product of matrices $$ [A]^B_B =[\operatorname{id}]^E_B \cdot [A]^E_E \cdot [\operatorname{id}]^B_E\ ,$$ in which $[\operatorname{id}]^E_B$ and $[\operatorname{id}]^B_E$ denote the change of basis matrices from $E$ to $B$ and from $B$ to $E$ respectively.

Now, the columns of $[\operatorname{id}]^E_B$ represent the standard basis vectors with respect to the basis $B$, where each row entry gives the coefficient of the corresponding basis vector of $B$. So the first column of $$ [\operatorname{id}]^E_B = \frac{1}{9}\begin{pmatrix} 1&2&2\\ 2&1&-2\\ 2&-2&1 \end{pmatrix} $$ represents $\mathbf e_1 = \frac{1}{9}\mathbf b_1 + \frac{2}{9}\mathbf b_2+\frac{2}{9} \mathbf b_3 $.

The columns of $[\operatorname{id}]^B_E$ are simply the vectors $\mathbf b_i$ written in standard coordinates, so $[\operatorname{id}]^B_E = 3[A]^E_E$. By exploiting the orthogonality and symmetry of $[A]^E_E$ (so that $([A]^E_E)^{-1} = [A]^E_E$) we quickly find $ [\operatorname{id}]^E_B = \left([\operatorname{id}]^B_E\right)^{-1} = \frac{1}{9}[\operatorname{id}]^B_E $.

In $[\operatorname{id}]^B_E$, conversely, the first column represents $\mathbf b_1 = \mathbf e_1 + 2\mathbf e_2+2 \mathbf e_3 $.
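Both change-of-basis matrices can be built and checked directly; the following is an illustrative sketch with ad hoc names:

```python
from fractions import Fraction

M = [[1, 2, 2], [2, 1, -2], [2, -2, 1]]

def matmul(X, Y):
    """Multiply two 3x3 matrices of Fractions."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# [id]^B_E has the b_i as columns: it converts B-coordinates to E-coordinates.
id_B_to_E = [[Fraction(x) for x in row] for row in M]

# Its inverse converts E-coordinates to B-coordinates.  Because M is symmetric
# with M^2 = 9I, that inverse is simply (1/9) M.
id_E_to_B = [[x / 9 for x in row] for row in id_B_to_E]

I3 = [[Fraction(1 if i == j else 0) for j in range(3)] for i in range(3)]
assert matmul(id_E_to_B, id_B_to_E) == I3

# The first column of id_E_to_B encodes e_1 = (1/9) b_1 + (2/9) b_2 + (2/9) b_3:
coeffs = [id_E_to_B[i][0] for i in range(3)]
e1 = [sum(coeffs[j] * Fraction(M[i][j]) for j in range(3)) for i in range(3)]
assert e1 == [1, 0, 0]
```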

In the matrix representation of the linear map $A$ with respect to $B$ $$ [A]^B_B =\frac{1}{3}\begin{pmatrix} 1&2&2\\ 2&1&-2\\ 2&-2&1 \end{pmatrix}\ , $$ the columns represent the images of the basis vectors of $B$ in terms of $B$. So, the first column represents $[A]^B_B \mathbf b_1 = \frac{1}{3} \mathbf b_1 + \frac{2}{3}\mathbf b_2+\frac{2}{3}\mathbf b_3$. Indeed, since $[\operatorname{id}]^B_E = 3[A]^E_E$ commutes with $[A]^E_E$ and $([A]^E_E)^2 = I$, the conjugation collapses to $[A]^B_B = ([A]^E_E)^3 = [A]^E_E$.
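The change-of-basis product can also be carried out numerically; a minimal sketch, reusing the same ad hoc helper names as above:

```python
from fractions import Fraction

M = [[1, 2, 2], [2, 1, -2], [2, -2, 1]]

def matmul(X, Y):
    """Multiply two 3x3 matrices of Fractions."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[Fraction(x, 3) for x in row] for row in M]         # [A]^E_E
id_B_to_E = [[Fraction(x) for x in row] for row in M]    # columns are the b_i
id_E_to_B = [[x / 9 for x in row] for row in id_B_to_E]  # inverse of the above

# [A]^B_B = [id]^E_B . [A]^E_E . [id]^B_E
A_BB = matmul(id_E_to_B, matmul(A, id_B_to_E))

# The conjugation collapses to A itself, since [id]^B_E = 3A and A^2 = I.
assert A_BB == A
assert [A_BB[i][0] for i in range(3)] == [Fraction(1, 3),
                                          Fraction(2, 3),
                                          Fraction(2, 3)]
```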

Lastly, we consider the matrix representations of $A$ that mix the two bases: $[A]^E_B$, which takes input coordinates with respect to $E$ and returns output coordinates with respect to $B$; and vice versa, $[A]^B_E$.

For $[A]^E_B = \frac{1}{3}I_3$ and $[A]^B_E = 3I_3$, the first columns represent $$[A]^E_B \mathbf e_1 = \frac{1}{3} \mathbf b_1$$ and $$[A]^B_E \mathbf b_1 = 3 \mathbf e_1\ ,$$ respectively.
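These two mixed-basis representations can be checked the same way; again a sketch with assumed helper names:

```python
from fractions import Fraction

M = [[1, 2, 2], [2, 1, -2], [2, -2, 1]]

def matmul(X, Y):
    """Multiply two 3x3 matrices of Fractions."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def scale(c, X):
    return [[c * x for x in row] for row in X]

A = [[Fraction(x, 3) for x in row] for row in M]         # [A]^E_E
id_B_to_E = [[Fraction(x) for x in row] for row in M]
id_E_to_B = scale(Fraction(1, 9), id_B_to_E)

I3 = [[Fraction(1 if i == j else 0) for j in range(3)] for i in range(3)]

# [A]^E_B: E-coordinates in, B-coordinates out.
A_EB = matmul(id_E_to_B, A)
assert A_EB == scale(Fraction(1, 3), I3)

# [A]^B_E: B-coordinates in, E-coordinates out.
A_BE = matmul(A, id_B_to_E)
assert A_BE == scale(Fraction(3), I3)
```

That both come out as scalar multiples of $I_3$ reflects the fact that $A$ maps each $\mathbf b_i$ to $3\mathbf e_i$ and each $\mathbf e_i$ to $\frac{1}{3}\mathbf b_i$.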

TL;DR: always be aware of the bases with respect to which the input and output coordinates of a matrix are taken.