how to check a matrix representation of a linear transform


Consider the linear transform $T: V=\mathbb{R}^2 \rightarrow W=\mathbb{R}^3$ given by:

$$T(a,b) = (-a, a+b, a-b)$$

The basis of $V$ is:

$$\{\ (2,1),\ (1,7)\ \}$$

and the basis of $W$ is:

$$\{\ (1,0,0),\ (0,1,0),\ (0,0,1)\ \}$$


Now applying transform function T to each vector in V basis:

$$T(2,1) = (-2,3,1)$$ $$T(1,7) = (-1,8,-6)$$

Since $W$ uses the standard basis, the matrix representation of $T$ is formed by taking $T(2,1)$ and $T(1,7)$ as its columns. Thus, the matrix of $T$ is:

$$T = \begin{bmatrix} -2 & -1 \\ 3 & 8 \\ 1 & -6 \end{bmatrix} $$
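This construction can be reproduced numerically (a minimal sketch using NumPy; the function name `T` is just the transformation written out as code):

```python
import numpy as np

def T(v):
    # The given transformation: T(a, b) = (-a, a+b, a-b)
    a, b = v
    return np.array([-a, a + b, a - b])

# Columns of the matrix are T applied to each basis vector of V,
# expressed in the standard basis of W = R^3
M = np.column_stack([T((2, 1)), T((1, 7))])
# M matches the hand computation:
# [[-2 -1]
#  [ 3  8]
#  [ 1 -6]]
```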

Now for the part that I don't get: how do I check that my matrix $T$ actually works?

I'm wondering why I can't use my matrix $T$ in place of the function $T$ and get the same result. Am I correctly understanding the purpose of the matrix representation of $T$?

$$ \begin{bmatrix} -2 & -1 \\ 3 & 8 \\ 1 & -6 \end{bmatrix} \begin{bmatrix} -2 \\ -1 \end{bmatrix} \ne \begin{bmatrix} -2 \\ 3 \\ 1 \end{bmatrix} $$

why?


There are 3 answers below.


Don't forget that, in $\mathbb{R}^2$, the basis you are working with is $\bigl((2,1),(1,7)\bigr)$, not the standard one. Since the coordinates of $(-2,-1)$ in that basis are $-1$ and $0$, the correct check is$$\begin{bmatrix}-2&-1\\3&8\\1&-6\end{bmatrix}.\begin{bmatrix}-1\\0\end{bmatrix}=\begin{bmatrix}2\\-3\\-1\end{bmatrix}.$$
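This check can be carried out numerically (a minimal sketch using NumPy; the matrix `P` holding the domain basis vectors is introduced here for illustration):

```python
import numpy as np

# Matrix of T relative to the basis {(2,1),(1,7)} of R^2
# and the standard basis of R^3
T = np.array([[-2, -1],
              [ 3,  8],
              [ 1, -6]])

# Coordinates of (-2,-1) in the basis {(2,1),(1,7)}:
# solve P c = (-2,-1), where P has the basis vectors as columns
P = np.array([[2, 1],
              [1, 7]])
c = np.linalg.solve(P, np.array([-2.0, -1.0]))  # c = [-1, 0]

# The matrix applied to the coordinate vector gives T(-2,-1) = (2,-3,-1)
result = T @ c
```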


$(-2, -1) = -1\cdot(2,1) + 0\cdot (1,7)$, meaning you should be multiplying your matrix $T$ with the column $\begin{bmatrix}-1\\0\end{bmatrix}$ to get the correct result.

Remember, the matrix $T$, representing the linear transformation $\mathcal T$ (these two things are not the same! The matrix represents, but is not equal to, the transformation!) is calculated for the specific basis $V$. The matrix is, by definition, such that if $x=\alpha_1 (2,1) + \alpha_2(1,7)$, then $$ \begin{bmatrix} -2 & -1 \\ 3 & 8 \\ 1 & -6 \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix}= \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix} $$ where $\mathcal T(x)=\beta_1(1,0,0) + \beta_2(0,1,0)+\beta_3(0,0,1)$.


Even more generally, if you have:

  • a linear transformation $\mathcal A:\mathbb R^m\to\mathbb R^n$ (in fact, the domain can be any $m$-dimensional vector space and the codomain any $n$-dimensional vector space),
  • a basis $\{v_1,\dots, v_m\}$ of $\mathbb R^m$,
  • a basis $\{w_1,\dots, w_n\}$ of $\mathbb R^n$, and
  • $x=\alpha_1v_1 + \cdots + \alpha_m v_m$ is an element of $\mathbb R^m$

then the matrix $A$, representing $\mathcal A$ in the appropriate bases, has the following property: $\mathcal A(x) = \beta_1w_1+\dots+\beta_nw_n$, where

$$A\begin{bmatrix}\alpha_1\\\vdots\\\alpha_m\end{bmatrix}=\begin{bmatrix}\beta_1\\\vdots\\\beta_n\end{bmatrix}$$
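This defining property can be verified numerically for the question's transformation (a minimal sketch using NumPy; the test vector `x` is an arbitrary choice for illustration):

```python
import numpy as np

def T(v):
    # T(a, b) = (-a, a+b, a-b), as in the question
    a, b = v
    return np.array([-a, a + b, a - b])

v1, v2 = np.array([2, 1]), np.array([1, 7])   # basis of the domain
A = np.column_stack([T(v1), T(v2)])           # matrix of T in these bases

# For any x: find its coordinates alpha in {v1, v2}; then A @ alpha
# must equal T(x) (whose standard coordinates are its W-coordinates here,
# since W carries the standard basis)
x = np.array([5.0, -3.0])
P = np.column_stack([v1, v2])                 # change-of-basis matrix
alpha = np.linalg.solve(P, x)
check = np.allclose(A @ alpha, T(x))          # True
```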


$$T: V \rightarrow W $$

(Note that this answer uses a different example from the question: here $V=\mathbb{R}^3$ and $W=\mathbb{R}^2$, and $W$ does not carry the standard basis.)

$$\text{basis}(V) = \left\{v_1=\begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix}, v_2=\begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix}, v_3=\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}\right\} $$

$$\text{basis}(W) = \left\{w_1=\begin{bmatrix}4 \\ 3 \end{bmatrix}, w_2=\begin{bmatrix}3 \\ 2 \end{bmatrix}\right\} $$

Let $B$ be the matrix formed by concatenating the basis vectors of $W$ as columns:

$$B=\begin{bmatrix}4 & 3 \\ 3 & 2\end{bmatrix}$$

$$ \begin{aligned} T(v_1) &= B\,\operatorname{col}(1, T) \\ T(v_2) &= B\,\operatorname{col}(2, T) \\ T(v_3) &= B\,\operatorname{col}(3, T) \end{aligned} $$

where $\operatorname{col}(i, T)$ denotes the $i$-th column of the matrix $T$.

$$ v_x = \begin{bmatrix} a\\ b\\ c \end{bmatrix}$$

$$T(v_x) = aT(v_1) +bT(v_2)+cT(v_3)$$

or, equivalently:

$$T(a,b,c) = aT(v_1) +bT(v_2)+cT(v_3)$$
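The identity above can be checked numerically (a minimal sketch; the entries of the matrix `T` and the vector `v` below are arbitrary illustrative choices, not taken from the question):

```python
import numpy as np

# Basis of W as columns, taken from this answer: w1=(4,3), w2=(3,2)
B = np.array([[4, 3],
              [3, 2]])

# A hypothetical matrix for T relative to the standard basis of V = R^3
# and the basis {w1, w2} of W (entries chosen arbitrarily)
T = np.array([[1, 0,  2],
              [0, 1, -1]])

# T(v_i) in standard coordinates is B times column i of T; by linearity,
# T(a,b,c) = a*T(v1) + b*T(v2) + c*T(v3), which equals B @ (T @ (a,b,c))
v = np.array([2.0, -1.0, 3.0])                # coordinates (a, b, c)
lhs = sum(v[i] * (B @ T[:, i]) for i in range(3))
rhs = B @ (T @ v)
ok = np.allclose(lhs, rhs)                    # True
```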