Computing $[T]_\beta$ notation


I am having what I think is a notational issue with the following problem:

For each of the following linear operators $T$ on a vector space $V$ and ordered bases $\beta$, compute $[T]_\beta$, and determine whether $\beta$ is a basis consisting of eigenvectors of $T$.

$V = \mathbb{R}^2$, $T\begin{pmatrix}a\\b\end{pmatrix} = \begin{pmatrix}10a-6b \\ 17a-10b\end{pmatrix}$, and $\beta = \left\{\begin{pmatrix}1\\2\end{pmatrix}, \begin{pmatrix}2\\3\end{pmatrix}\right\}$.

I feel like an idiot for being stuck on this: I know how to compute a change-of-basis matrix, but I'm getting caught up on the notation.

How on earth do I do this?

Best answer:

Crash course! Let $V$ and $W$ be finite dimensional vector spaces, and $\mathcal{B} = (v_1,\ldots,v_n)$, $\mathcal{C} = (w_1,\ldots,w_m)$ be bases for $V$ and $W$.

If $v \in V$, we can write $v= \sum_{i=1}^n \lambda_i v_i$ for certain $\lambda_1,\ldots, \lambda_n \in \Bbb R$. We put $$[v]_{\mathcal{B}} = \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_n\end{pmatrix},$$and call this column vector the "coordinates of $v$ in the basis $\mathcal{B}$".
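As a concrete instance of this definition, here is a quick Python check for a basis of $\Bbb R^2$; the basis and the vector $v$ are illustrative choices of mine, not from the problem, and the $2\times 2$ system is solved by Cramer's rule:

```python
# Coordinates of v in the basis B = (v1, v2) of R^2: solve
# l1*v1 + l2*v2 = v, i.e. the 2x2 system with columns v1, v2 (Cramer's rule).
v1, v2, v = (1, 2), (2, 3), (3, 5)            # illustrative data
det = v1[0]*v2[1] - v2[0]*v1[1]               # -1, nonzero, so B is a basis
l1 = (v[0]*v2[1] - v2[0]*v[1]) / det
l2 = (v1[0]*v[1] - v[0]*v1[1]) / det
print((l1, l2))                               # -> (1.0, 1.0)
# Sanity check: l1*v1 + l2*v2 recovers v
assert (l1*v1[0] + l2*v2[0], l1*v1[1] + l2*v2[1]) == v
```

Indeed, $1\cdot(1,2) + 1\cdot(2,3) = (3,5)$, so $[v]_{\mathcal{B}} = (1,1)^T$ here.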

If $T\colon V \to W$ is linear, then $[T]_{\mathcal{B},\mathcal{C}} = (a_{ij})$ is the $m \times n$ matrix such that $$T(v_j) = \sum_{i=1}^m a_{ij}w_i, \quad 1 \leq j \leq n.$$Meaning: "compute $T$ at the elements of $\mathcal{B}$, write the results as combinations of $\mathcal{C}$, and record them as columns": $$[T]_{\mathcal{B},\mathcal{C}} = \begin{pmatrix} | & & | \\ [T(v_1)]_{\mathcal{C}} & \cdots & [T(v_n)]_{\mathcal{C}} \\ | & & | \end{pmatrix}.$$This matrix satisfies $[T(v)]_{\mathcal{C}} = [T]_{\mathcal{B},\mathcal{C}}[v]_{\mathcal{B}}$ for all $v \in V$. That is, you can transfer all computations involving $T$ to these matrices and column vectors. Also, $[T]_{\mathcal{B}}$ is shorthand for $[T]_{\mathcal{B},\mathcal{B}}$, so you do not have to write $\mathcal{B}$ twice.
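The column-by-column recipe can be sketched in a few lines of Python. The helper names `coords` and `matrix_of`, the map $T(a,b) = (a+b,\, a-b)$, and the choice of standard basis are all illustrative assumptions of mine:

```python
def coords(C, w):
    """[w]_C for a basis C = (w1, w2) of R^2, via Cramer's rule."""
    w1, w2 = C
    det = w1[0]*w2[1] - w2[0]*w1[1]
    return ((w[0]*w2[1] - w2[0]*w[1]) / det,
            (w1[0]*w[1] - w[0]*w1[1]) / det)

def matrix_of(T, B, C):
    """[T]_{B,C} for R^2: the j-th column is [T(v_j)]_C."""
    cols = [coords(C, T(v)) for v in B]
    return [[cols[0][0], cols[1][0]],
            [cols[0][1], cols[1][1]]]

# Illustrative map T(a, b) = (a + b, a - b) in the standard basis:
# the columns are T(e1) = (1, 1) and T(e2) = (1, -1).
T = lambda v: (v[0] + v[1], v[0] - v[1])
E = ((1, 0), (0, 1))
print(matrix_of(T, E, E))    # -> [[1.0, 1.0], [1.0, -1.0]]
```

With the standard basis on both sides, the columns are just $T(e_1)$ and $T(e_2)$, which is why the output matrix reads off the coefficients of $T$ directly.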

Lastly, you can understand changes of basis very easily with this notation. If $\mathcal{B}'$ is another basis for $V$, then you can look at the particular linear map ${\rm Id}_V\colon V \to V$. The change of basis is done by the matrices $[{\rm Id}_V]_{\mathcal{B},\mathcal{B}'}$ and $[{\rm Id}_V]_{\mathcal{B}',\mathcal{B}} = [{\rm Id}_V]_{\mathcal{B},\mathcal{B}'}^{-1}$, which are not the identity matrix unless $\mathcal{B} = \mathcal{B}'$!
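For a concrete (again illustrative) instance: if $\mathcal{B}' $ is the standard basis of $\Bbb R^2$ and $\mathcal{B} = ((1,2),(2,3))$, then $[{\rm Id}_V]_{\mathcal{B},\mathcal{B}'}$ simply has the vectors of $\mathcal{B}$ as its columns, and inverting it gives the change of basis in the other direction:

```python
# Change of basis from B = ((1,2), (2,3)) to the standard basis of R^2:
# the matrix P = [Id]_{B, std} has the basis vectors as columns.
B = ((1, 2), (2, 3))
P = [[B[0][0], B[1][0]],
     [B[0][1], B[1][1]]]
# Inverse of a 2x2 matrix [[a, b], [c, d]]: (1/det) * [[d, -b], [-c, a]].
det = P[0][0]*P[1][1] - P[0][1]*P[1][0]
P_inv = [[ P[1][1]/det, -P[0][1]/det],
         [-P[1][0]/det,  P[0][0]/det]]
print(P_inv)    # -> [[-3.0, 2.0], [2.0, -1.0]], i.e. [Id]_{std, B}
```

Neither matrix is the identity, even though the map being represented is ${\rm Id}_V$, exactly as noted above.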


In your exercise, we compute $$T(1,2) = (10\cdot 1 - 6\cdot 2,\; 17\cdot 1 - 10\cdot 2) = (-2,-3) = 0\cdot (1,2) + (-1)\cdot (2,3),$$which says that $$[T]_\beta = \begin{pmatrix} 0 & \ast \\ -1 & \ast \end{pmatrix}.$$To get the rest of the matrix we compute $$T(2,3) = (2,4) = 2\cdot (1,2) + 0\cdot (2,3),$$so that $$[T]_\beta = \begin{pmatrix} 0 & 2 \\ -1 & 0 \end{pmatrix}.$$The vectors in $\beta$ are not eigenvectors of $T$: if they were, $[T]_\beta$ would be diagonal, and it is not.
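If you want a numerical sanity check of this computation, here is a short Python sketch (my own; the `coords` helper solves the $2\times 2$ systems above by Cramer's rule):

```python
# The operator from the exercise: T(a, b) = (10a - 6b, 17a - 10b).
def T(v):
    return (10*v[0] - 6*v[1], 17*v[0] - 10*v[1])

def coords(beta, w):
    """[w]_beta for a basis beta = (v1, v2) of R^2, via Cramer's rule."""
    v1, v2 = beta
    det = v1[0]*v2[1] - v2[0]*v1[1]
    # "+ 0.0" normalizes an IEEE -0.0 result to 0.0 for clean output
    return ((w[0]*v2[1] - v2[0]*w[1]) / det + 0.0,
            (v1[0]*w[1] - w[0]*v1[1]) / det + 0.0)

beta = ((1, 2), (2, 3))
cols = [coords(beta, T(v)) for v in beta]   # [T(v1)]_beta, [T(v2)]_beta
M = [[cols[0][0], cols[1][0]],
     [cols[0][1], cols[1][1]]]
print(M)    # -> [[0.0, 2.0], [-1.0, 0.0]]
```

The columns $(0,-1)$ and $(2,0)$ match the hand computation, and the off-diagonal entries confirm that $\beta$ is not a basis of eigenvectors.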