Matrix of $T$ over $P_2$


I'm having a bit of trouble following the logic my professor used to construct the matrix of a given transformation $T$ in class, and was wondering if anyone could share any further intuition or insight.

Given $P_2$, the set of polynomials of degree $\leq2$, define a linear transformation $T:P_2\rightarrow \mathbb{R}^3$ such that $T\big(p(x)\big) = \begin{pmatrix} p(0)\\p(1)\\p(2)\end{pmatrix}$.

The matrix of $T$ with respect to the basis $\mathcal{B}=\left\{1,x,x^2\right\}$

is given by $T(1)=\begin{pmatrix} 1\\1\\1\end{pmatrix}$, $T(x)=\begin{pmatrix} 0\\1\\2\end{pmatrix}$, and $T(x^2)=\begin{pmatrix} 0\\1\\4\end{pmatrix}$,

i.e. $\,A=\begin{pmatrix}1&0&0\\1&1&1\\1&2&4\end{pmatrix}$.
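The construction can be checked numerically: each column of $A$ is $T$ applied to one basis polynomial, i.e. that polynomial evaluated at $0$, $1$, and $2$. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Evaluation points for T(p) = (p(0), p(1), p(2))
points = np.array([0.0, 1.0, 2.0])

# Basis B = {1, x, x^2}; evaluating each basis polynomial at the
# points gives one *column* of the matrix A.
basis = [lambda x: np.ones_like(x),   # p(x) = 1
         lambda x: x,                 # p(x) = x
         lambda x: x**2]              # p(x) = x^2

A = np.column_stack([p(points) for p in basis])
print(A)
# [[1. 0. 0.]
#  [1. 1. 1.]
#  [1. 2. 4.]]
```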

I get the whole business of the only relation among the columns being the trivial one, $c_1=\cdots=c_m=0$, and that $\operatorname{im}T=\mathbb{R}^{3}$, so that $T$ is an isomorphism.

What I'm struggling with, quite frankly, is how he got the entries of $A$. The column for $p(x)=1$ in particular is a bit counter-intuitive. Is the idea that, if we define $p(x)=1$, then no matter where we evaluate, $p(0)=p(1)=p(2)=1$?


The notation for expressing the various matrices associated with an isomorphism is also a bit nebulous for me. My textbook defines a $\mathcal{B}$-coordinate transformation as

$T^{-1}\begin{bmatrix}c_1\\\vdots\\c_n\end{bmatrix}=c_1f_1+\cdots + c_nf_n$.
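For concreteness, that coordinate map just rebuilds a function from its coefficients. A small sketch for $\mathcal{B}=\{1,x,x^2\}$ (the polynomial $p(x)=3-2x+x^2$ is a made-up example):

```python
# The inverse B-coordinate map sends a coefficient vector (c1, ..., cn)
# to the function c1*f1 + ... + cn*fn.  Here f1=1, f2=x, f3=x^2.
def from_coordinates(c):
    """Coefficients [c1, c2, c3] -> the polynomial c1 + c2*x + c3*x^2."""
    return lambda x: c[0] * 1 + c[1] * x + c[2] * x**2

p = from_coordinates([3, -2, 1])   # p(x) = 3 - 2x + x^2
print(p(2))  # 3 - 4 + 4 = 3
```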

What would the inverse of the transformation $T\big(p(x)\big) = \begin{pmatrix} p(0)\\p(1)\\p(2)\end{pmatrix}$ be?

What would the matrix associated with the inverse be? Or would it be a set of three different polynomials (linear equations)?
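One way to see what the inverse does: $T^{-1}$ is polynomial interpolation. Given values $(y_0,y_1,y_2)$, solving $Ac=y$ recovers the coefficients of the unique quadratic through $(0,y_0)$, $(1,y_1)$, $(2,y_2)$, so the matrix of $T^{-1}$ is $A^{-1}$. A sketch with NumPy (the sample values are an assumption for illustration):

```python
import numpy as np

# Matrix of T in the basis B = {1, x, x^2}, as computed above.
A = np.array([[1., 0., 0.],
              [1., 1., 1.],
              [1., 2., 4.]])

# The inverse transformation is interpolation: given values y at 0, 1, 2,
# solve A c = y for the coefficients of the unique quadratic with
# p(0) = y0, p(1) = y1, p(2) = y2.
y = np.array([1., 2., 5.])     # sample values, chosen for illustration
c = np.linalg.solve(A, y)
print(c)                       # -> [1. 0. 1.], i.e. p(x) = 1 + x^2
```

The columns of $A^{-1}$ are the coefficient vectors of the three Lagrange basis polynomials for the nodes $0,1,2$, which is the sense in which the inverse corresponds to "a set of three different polynomials."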

For those wondering, we're using Bretscher's Linear Algebra with Applications, 5th edition. Thanks!

BEST ANSWER

To give you a more complicated example, consider $T(f(x)) = f'(x)$ with $T:P_2 \rightarrow P_2$, and, to be perverse, let's use $\beta = \{ 1,x,x^2 \}$ as the domain basis but $\gamma = \{ x^2,x,1 \}$ as the codomain basis. To understand $T$, I like to take $f(x)=a+bx+cx^2$ and see what happens:
$$ T(f(x)) = b+2cx. $$
The coordinate vector of the image is then easy enough to see:
$$ [T(f(x))]_{\gamma}= [b+2cx]_{\gamma} = [0(x^2)+2c(x)+b(1)]_{\gamma} = [0,2c,b]^T. $$
The matrix $[T]_{\beta,\gamma}$ is the matrix which, when multiplied against
$$[f(x)]_{\beta} = [a+bx+cx^2]_{\beta} = [a,b,c]^T,$$
yields $[T(f(x))]_{\gamma}=[0,2c,b]^T$. A moment's reflection gives:
$$ [T]_{\beta, \gamma} = \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 1 & 0\end{array}\right]. $$
Alternatively, just use:
$$ [T]_{\beta, \gamma} = [\, [T(1)]_{\gamma} \mid [T(x)]_{\gamma} \mid [T(x^2)]_{\gamma} \,] = [\,[0]_{\gamma}\mid[1]_{\gamma}\mid[2x]_{\gamma}\,] = \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 1 & 0\end{array}\right].$$
Some books also use $[T]_{\beta}^{\gamma}$ to reflect the differing roles the domain and codomain bases play, especially with regard to coordinate change. Perhaps your text does that.
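The column-by-column recipe above can be verified numerically. A short sketch, assuming NumPy, checking the derivative matrix against a sample polynomial (the coefficients $(a,b,c)=(5,3,4)$ are chosen for illustration):

```python
import numpy as np

# Derivative map T(f) = f' on P2, with domain basis beta = {1, x, x^2}
# and (deliberately reversed) codomain basis gamma = {x^2, x, 1}.
# Column j of [T]_{beta,gamma} is the gamma-coordinate vector of T
# applied to the j-th beta basis polynomial:
#   T(1)   = 0   -> 0*x^2 + 0*x + 0*1 -> (0, 0, 0)
#   T(x)   = 1   -> 0*x^2 + 0*x + 1*1 -> (0, 0, 1)
#   T(x^2) = 2x  -> 0*x^2 + 2*x + 0*1 -> (0, 2, 0)
M = np.column_stack([[0., 0., 0.],
                     [0., 0., 1.],
                     [0., 2., 0.]])

# Check on f(x) = 5 + 3x + 4x^2: f'(x) = 3 + 8x, whose
# gamma-coordinates (in the order x^2, x, 1) are (0, 8, 3).
coords_beta = np.array([5., 3., 4.])
print(M @ coords_beta)   # -> [0. 8. 3.]
```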